
Meaning of Windows 16-Bit | 32-Bit | 64-Bit | 128-Bit



DEFINITION OF 16-Bit - 32-Bit - 64-Bit - 128-Bit
Ten years ago, the popularity of 16-bit applications began shifting toward 32-bit applications. Now it is 32-bit's turn to step aside and be replaced by 64-bit. The arrival of a desktop 64-bit operating system, Windows XP Professional x64 Edition, further confirms that it is time to start switching to 64-bit. Some argue that Windows XP Professional x64 is not a revolutionary product: it is the result of an evolution, like others that have happened before. And it is certainly about time, considering how long 64-bit processors have been available on the market.

What is 64-bit? 

For those of you still wondering what 64-bit means, the following explanation may help; if not, you can proceed to the next section. In a processor, the number of bits is the length, or amount, of data that can be processed directly in one step. A 32-bit CPU, for example, can process a 32-bit-long instruction in one clock cycle. A 64-bit processor, then, is a CPU capable of processing instructions up to 64 bits long in one clock cycle. Data that the CPU has finished processing is then placed into memory, so by increasing the length of data the CPU can process at once, you indirectly improve memory performance as well.
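
As a rough illustration of what word length buys, here is a small Python sketch (illustrative only, tied to no particular CPU) of the value range and byte-addressable space of 32-bit versus 64-bit words:

```python
# Illustrative only: what a 32-bit vs 64-bit word size means in raw numbers.

def word_stats(bits: int) -> None:
    unsigned_max = 2**bits - 1    # largest unsigned integer a word can hold
    address_space = 2**bits       # number of distinct byte addresses
    print(f"{bits}-bit word:")
    print(f"  max unsigned integer  : {unsigned_max:,}")
    print(f"  byte-addressable space: {address_space / 2**30:,.0f} GiB")

word_stats(32)   # ~4 GiB of addressable memory
word_stats(64)   # ~17 billion GiB of addressable memory
```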

64-bit Processors in the x86 Architecture

The x86 architecture is the basic design for desktop processors from Intel (and Intel-compatible vendors), starting from the 8088 up to the era of the Intel Pentium 4 and Intel Pentium D, and for AMD up to the AMD Athlon class. Initially, AMD referred to its 64-bit design as x86-64 extensions, but in 2003 AMD changed the name and called it AMD64. The first draft proposal for 64-bit on the x86 architecture was started by AMD: technical reference documentation was provided starting from August 2000, intended for software developers to adapt and optimize their code for the instruction formats available on AMD64. Intel's EM64T is based on the AMD64 architecture. The fundamental difference between the two lies in the specific features possessed only by Intel processors, such as Hyper-Threading (HT) technology or SSE3 instructions. IA-64 is the term used by Intel for the architecture of the Intel Itanium and Intel Itanium 2 processors. In contrast to AMD64 and Intel EM64T, which are built on the x86 architecture, IA-64 has only limited compatibility with x86.

So, What Is 128-bit RAM?

A 32-bit processor already works with 64-bit memory modules; this refers to the data bus that runs between the processor and RAM. Some of the newest systems, with support from the motherboard's northbridge chipset, enable the processor and RAM to work over a faster data bus by utilizing a dual-channel design. Manufacturers often refer to this as "128-bit", although the naming is not entirely correct.

What Is the Maximum Amount of RAM That Can Be Used in 64-bit Windows?

The maximum amount of RAM a system can exploit depends on three things: the processor, the motherboard, and the operating system. Most motherboards with recent chipsets support up to 4 GB, while most applications on a 32-bit Windows operating system can only access memory up to a maximum of 4 GB. This applies specifically to Windows XP.

Probably most PC users have not yet used the maximum RAM capacity of 4 GB. However, the 4 GB figure is becoming a disturbing limitation for PC workstation use, as in CAD/CAM design, image manipulation, and high-end video; and of course games will also take advantage of more memory. In Windows XP Professional x64 Edition, the maximum limit rises to 128 GB of RAM, with the capacity to handle a virtual memory size of up to 16 TB (terabytes).

With the Arrival of the 64-bit Era, What Happens to Long-Standing Instruction Sets Such as x87 FP, MMX, 3DNow!, and SSE?

The x87 floating-point architecture was originally separate from the processor; some knew it as the math co-processor chip, in parts such as the 8087, 80287, 80387, or 487SX. Starting with the Intel 486DX, Pentium, and later processors, x87 is built into the processor itself. x87's main task is high-precision calculation, as in CAD (computer-aided design) applications and spreadsheets. x87 in particular has come to be regarded as no longer effective, so in the AMD64 architecture its task is replaced by a so-called "flat register file", which has a total of 16 entries.


As for the SSE2 (Streaming SIMD Extensions 2) instructions, AMD64 keeps them. SSE2 can be used for both 32-bit and 64-bit calculations, and it is faster than relying on x87. SSE2 also accommodates the MMX and 3DNow! instructions, which further allows AMD64 and Intel EM64T to work more optimally, both for 32-bit applications and for 64-bit applications.


Which Windows Operating Systems Already Support 64-bit Computing?

For the desktop, the operating system from Microsoft that already supports 64-bit is Windows XP Professional x64 Edition. Microsoft actually has several other operating systems that support 64-bit technology, but they are not aimed at desktop PCs: Windows Server 2003, Standard x64 Edition; Windows Server 2003, Enterprise x64 Edition; and Windows Server 2003, Datacenter x64 Edition. Some other versions of Microsoft operating systems also support 64-bit technology, such as Windows XP 64-bit Edition 2003 (discontinued) and Windows Server 2003, Enterprise Edition with SP1 for Itanium-based Intel systems. However, those OSes are not compatible with desktop 64-bit processors such as AMD64 and EM64T.

Desktop users will most likely turn to Microsoft Windows XP Professional x64 Edition, which became available in May 2005. If you are interested in knowing more, see http://www.microsoft.com/windowsxp/64bit/default.mspx.


What Is Windows XP Professional x64 Edition Like?

Windows XP Professional x64 Edition still uses the same user interface as Windows XP Professional Edition. Several upgrades were made to support desktop 64-bit processors with the AMD64/EM64T architectures; the improvements are primarily in how the operating system handles memory (more detail can be seen in the accompanying table). Windows XP Professional x64 Edition is built on the same codebase as Windows XP Service Pack 2, so the same additional facilities are available: support for wireless devices, Windows Firewall, Windows Security Center, Bluetooth infrastructure, Power Management, and support for the .NET Framework 1.1.

What Is the Fate of 32-bit Applications?

Considering how many applications still run on a 32-bit basis, this is a fairly important question. 64-bit Windows can still run 32-bit applications inside the 64-bit Windows environment. Backward compatibility is limited to 32-bit applications, however: the 64-bit OS cannot run 16-bit or MS-DOS applications. The installer file should also be noted: if, say, the setup.exe file is still in a 16-bit installer format, the installation will not proceed. What about games? Some popular games that are still 32-bit applications can still run. This is possible thanks to facilities such as the Program Compatibility Wizard (PCW). In Windows XP x64 Edition this is referred to as Windows on Windows 64 (WOW64), a subsystem forming an emulation layer in which 32-bit applications work inside the 64-bit OS environment. This ensures the two do not get in each other's way.

This is also done to prevent collisions between the two, in part by separating the DLLs (Dynamic Link Libraries) each side uses: 32-bit applications cannot access 64-bit DLLs, and vice versa.
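
Whether your own process is currently running inside this WOW64 layer can be asked of Windows directly. Here is a minimal Python sketch (Windows only; it assumes the IsWow64Process API is present, which holds from Windows XP SP2 onward):

```python
# A minimal sketch (Windows only) that asks the kernel whether the current
# process is a 32-bit process running under the WOW64 layer of 64-bit Windows.
import ctypes
import ctypes.wintypes

def running_under_wow64() -> bool:
    kernel32 = ctypes.windll.kernel32
    is_wow64 = ctypes.wintypes.BOOL(False)
    current_process = kernel32.GetCurrentProcess()
    # IsWow64Process reports TRUE only for 32-bit code on 64-bit Windows.
    if not kernel32.IsWow64Process(current_process, ctypes.byref(is_wow64)):
        raise ctypes.WinError()
    return bool(is_wow64.value)

if __name__ == "__main__":
    print("Running under WOW64:", running_under_wow64())
```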


Is There a Performance Improvement between 32-bit and 64-bit?

If you expect miracles in PC performance from using a 64-bit OS and 64-bit software, you will be disappointed. There is no significant performance improvement between 64-bit and 32-bit, at least with the PCs and applications available now. So what benefit can be obtained from 64-bit? 32-bit Windows can only allocate a maximum of 4 GB of RAM and a 2-3 GB address space per application (depending on the application). Compare that with x64 Edition, which is able to handle up to a maximum of 128 GB of RAM and an 8 TB address space for every application process.

Larger physical memory and address space allow 64-bit applications to access memory more freely, reducing paging to the hard drive and thus indirectly improving overall performance. As for speed, there is most likely no significant improvement, given that the processor clock speed used when running 64-bit applications is the same as when running 32-bit ones on AMD64 and EM64T processors.


What About 64-bit Application Development?

Do we need to wait long for 64-bit applications to become available? Up to now, the availability of applications that work natively on a 64-bit OS is still limited. But the launch of Intel's Pentium D desktop processors equipped with EM64T instructions will certainly speed this up, as will the availability of Windows XP Professional x64 Edition. It was estimated that by the end of 2005 we would begin to see applications that work natively in 64-bit. With the increasingly widespread availability of desktop 64-bit processors, together with the launch of Windows XP Professional x64 Edition, applications that support 64-bit computing have started to appear. The same goes for games: one game that has been optimized for 64-bit is "Shadow Ops: Red Mercury", which takes advantage of the memory optimization and improved multimedia capabilities of a 64-bit operating system. Many improvements can be felt, from more detailed graphics, in both textures and backgrounds, to improvements in AI (artificial intelligence).


Meaning of Expert Judgment

Expert judgment is the expression of an individual's or group's opinion to find solutions; the responses are based on experience, knowledge, or both. Anyone who has worked at a large company appreciates its importance in making good decisions. Project managers must not hesitate to ask or consult experts on different topics, such as which methodology to follow or which programming language to use.


What Expert Judgment Is Used For
[Lannoy & Procaccia, 2001] identify four situations that require recourse to expert judgment:
  • completing, validating, interpreting, and integrating existing data, and assessing the impact of a change,
  • predicting the occurrence of future events and the consequences of a decision,
  • determining the present state of knowledge in one field,
  • providing the elements needed for decision-making in the presence of several options.
The uncertainty of data in Expert Judgment
Expert judgment depends on the experts (their knowledge, experience, motivation, ...), the state of knowledge on the topic, and the dialogue between experts and analyst. So, according to Cooke, the most important tool in using expert judgment is the representation of uncertainty [Cooke, 1991].
Actors in expert judgement methods


There are two kinds of actors in an expert judgement method:
  • the experts; Ballay defines the expert as the "person who has the knowledge" [Ballay, 1997],
  • and the analyst, who carries out the expert judgement exercise.


Advantages and disadvantages of using Expert Judgment
Expert judgment uses the experience and knowledge of experts to estimate the cost of a software project. An advantage of this method is the experience from past projects that the expert brings to the proposed project. The expert can also factor in project impacts caused by new technologies, applications, and languages. Examples of popular expert judgment techniques include the Delphi and Wideband Delphi methods. Expert judgment techniques are suitable for assessing the differences between past and future programs, and are especially useful for new or unique programs for which no historical precedent exists. However, the expert's biases and sometimes insufficient knowledge may create difficulties, and it can be hard to document the factors the expert used in contributing to the estimate. Although Delphi techniques can help alleviate bias problems, experts are usually hard-pressed to accurately estimate the cost of a new software program. Therefore, while expert judgment models are useful in determining inputs to other types of models, they are not frequently used alone in software cost estimating.


Expert Judgment Tools
The first two methods using expert judgement were developed by the RAND Corporation in the United States after World War II [Cooke, 1991]: Scenario Analysis and the Delphi method.
Scenario Analysis
Herman Kahn is regarded as the father of scenario analysis [Cooke, 1991]. In The Year 2000 [Kahn & Wiener, 1967], Kahn defines scenarios as hypothetical sequences of events constructed for the purpose of focusing attention on causal processes and decision-points. They answer two kinds of questions:
  • Precisely how might some hypothetical situation come about, step by step?
  • What alternatives exist, for each actor, at each step, for preventing, diverting, or facilitating the process?
The method as applied in projecting the year 2000 works basically as follows. The analyst first identifies what he takes to be the set of basic long-term trends. These trends are then extrapolated into the future, taking account of any theoretical or empirical knowledge that might impinge on such extrapolations. The result is termed the surprise-free scenario. The surprise-free scenario serves as a foil for defining alternative futures or canonical variations. Roughly speaking, these are generated by varying key parameters in the surprise-free scenario.
Scenario analysis can also be used to illuminate "wild cards." For example, analysis of the possibility of the earth being struck by a large celestial object (a meteor) suggests that whilst the probability is low, the damage inflicted is so high that the event is much more important (threatening) than the low probability (in any one year) alone would suggest. However, this possibility is usually disregarded by organizations using scenario analysis to develop a strategic plan, since it has such overarching repercussions.
Scenario planning is a useful way of challenging the assumptions you naturally tend to make about the situation in which your plans will come to fruition. By building a few alternative scenarios, you can foresee more unknowns that may come to pass, and therefore you will be able to plan measures to counteract or mitigate their impact.


The Delphi method
The Delphi method was developed at the RAND Corporation in the early 1950s as a spin-off of an Air Force-sponsored research project, "Project Delphi". The original project was designed to anticipate an optimal targeting of U.S. industries by a hypothetical Soviet strategic planner. In the mid-1960s and early 1970s the Delphi method found a wide variety of applications, and by 1974 the number of Delphi studies had exceeded 10,000 [Linstone & Turoff, 1975].
The Delphi method has undergone substantial evolution and diversification. It was developed by mathematicians and engineers, and enjoyed considerable popularity among research managers, policy analysts, and corporate planners in the late 1960s and early 1970s. By the mid-1970s, psychometricians (people trained in conducting controlled experiments with humans) began taking a serious look at Delphi methods and results. According to Cooke [Cooke, 1991], the most significant study in this regard is Sackman's Delphi Critique (1975). As a result, the whole question of evaluating expert opinion and developing methodological guidelines for its use has moved into the foreground. Pure Delphi exercises seem to have disappeared, and play almost no role in contemporary discussions of expert opinion.
The basic idea of the Delphi method is as follows:
  • create a list of statements/questions
  • have the experts give their ratings/answers/etc.
  • make a report - send it out to everyone
  • have the experts revise their answers
  • make the second report
These iterated rounds of feedback and revision are what make the Delphi method such a strong tool for securing expert judgment. A minimal sketch of the aggregation step appears below.
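
To make the feedback loop concrete, here is a small, illustrative Python sketch (the expert names and estimates are invented): after each round, the facilitator reports the panel's median and quartiles back to the experts, who may then revise.

```python
# Illustrative sketch of the Delphi feedback loop: collect estimates,
# report the group median and spread, let experts revise, repeat.
from statistics import quantiles

def delphi_report(estimates: dict[str, float]) -> dict[str, float]:
    values = sorted(estimates.values())
    q1, q2, q3 = quantiles(values, n=4)  # quartiles of the panel's answers
    return {"median": q2, "lower_quartile": q1, "upper_quartile": q3}

# Round 1: initial (hypothetical) effort estimates in person-months.
round_1 = {"expert_a": 12.0, "expert_b": 30.0, "expert_c": 18.0, "expert_d": 24.0}
print("Round 1 report:", delphi_report(round_1))

# Round 2: after seeing the report, experts revise toward consensus.
round_2 = {"expert_a": 16.0, "expert_b": 24.0, "expert_c": 18.0, "expert_d": 22.0}
print("Round 2 report:", delphi_report(round_2))
```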



Meaning of Sudoku and Tricks to Win Fast

Web Game Category - Computer-Based Heuristic Algorithms
How to Play Sudoku - Fast and Easy

1- Explanation
Sudoku (数独, sūdoku) is a puzzle whose objective is to fill a 9×9 grid with digits so that each column, each row, and each of the nine 3×3 sub-grids that compose the grid (also called "boxes", "blocks", "regions", or "sub-squares") contains all of the digits from 1 to 9. The puzzle setter provides a partially completed grid, which typically has a unique solution.
Completed puzzles are always a type of Latin square with an additional constraint on the contents of individual regions: the same single integer may not appear twice in the same row or column of the 9×9 playing board, nor in any of the nine 3×3 subregions of the 9×9 playing board.
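
These three constraints are easy to state in code. A minimal sketch, assuming the grid is represented as a 9×9 list of lists with 0 for empty cells:

```python
# Checks the three Sudoku constraints: no digit repeats in any row,
# column, or 3x3 box. Empty cells are represented by 0 and ignored.
def is_valid_grid(grid: list[list[int]]) -> bool:
    def no_repeats(cells: list[int]) -> bool:
        digits = [d for d in cells if d != 0]
        return len(digits) == len(set(digits))

    rows = grid
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [
        [grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)]
        for br in range(0, 9, 3) for bc in range(0, 9, 3)
    ]
    return all(no_repeats(unit) for unit in rows + cols + boxes)
```
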
The puzzle was popularized in 1986 by the Japanese puzzle company Nikoli under the name Sudoku, meaning "single number". It became an international hit in 2005.
 
Although the 9×9 grid with 3×3 regions is by far the most common, variations abound. Sample puzzles can be 4×4 grids with 2×2 regions; 5×5 grids with pentomino regions have been published under the name Logi-5; the World Puzzle Championship has featured a 6×6 grid with 2×3 regions and a 7×7 grid with six heptomino regions and a disjoint region. Larger grids are also possible. The Times offers a 12×12-grid Dodeka sudoku with 12 regions of 4×3 squares. Dell regularly publishes 16×16 Number Place Challenger puzzles (the 16×16 variant often uses 1 through G rather than the 0 through F used in hexadecimal). Nikoli offers the 25×25 behemoth "Sudoku the Giant". Sudoku-zilla, a 100×100 grid, was published in print in 2010.

Another common variant is to add limits on the placement of numbers beyond the usual row, column, and box requirements. Often the limit takes the form of an extra "dimension"; the most common is to require the numbers in the main diagonals of the grid also to be unique. The aforementioned Number Place Challenger puzzles are all of this variant, as are the Sudoku X puzzles in the Daily Mail, which use 6×6 grids.

Mini Sudoku

A variant named "Mini Sudoku" appears in the American newspaper USA Today and elsewhere, which is played on a 6×6 grid with 3×2 regions. The object is the same as standard Sudoku, but the puzzle only uses the numbers 1 through 6.

Cross Sums Sudoku

Another variant is the combination of Sudoku with Kakuro on a 9 × 9 grid, called Cross Sums Sudoku, in which clues are given in terms of cross sums. The clues can also be given by cryptic alphametics in which each letter represents a single digit from 0 to 9. An excellent example is NUMBER+NUMBER=KAKURO which has a unique solution 186925+186925=373850. Another example is SUDOKU=IS*FUNNY whose solution is 426972=34*12558.

Killer Sudoku


[Figures: an example Killer Sudoku problem, its solution, and the same problem as it would be printed in black and white.]


Killer sudoku (also killer su doku, sumdoku, sum doku, addoku, or samunamupure) is a puzzle that combines elements of sudoku and kakuro. Despite the name, the simpler killer sudokus can be easier to solve than regular sudokus, depending on the solver's skill at mental arithmetic; the hardest ones, however, can take hours to crack.

[Figures: a Killer Sudoku puzzle and its solution; a Hypersudoku puzzle (a 9×9 grid with four shaded quadrants, some spaces pre-filled) and its solution; a Wordoku puzzle and its solution in red.]

How to Win Fast - My Tricks

Why another tutorial? Because you don't really need to know many tricks. I show this using a relatively hard puzzle by Wayne Gould, who creates puzzles for The Times of London. These are rated in difficulty from mild (the simplest) to fiendish (the one on the left). Gould claims that none of his puzzles ever needs trial and error to solve. If you follow this example through, you will find that you never really need very complicated tricks either. Another way of solving this very puzzle is given by Roger Walker in one of his tutorials. Our methods differ: I try to illustrate some often-used tricks in this example.

Step 1: Singletons: find the "loner"

"When you have eliminated the impossible whatever remains, however improbable, is the truth", said Sherlock Holmes. This is the principle by which we put the 3 in the top row. 1, 2 and 7 are eliminated by the clues in the row; 4, 5, 6 and 9 by those in the column, and 8 by the cell. This leaves the truth. I don't see it as very improbable; but one must give the master some poetic license. This rule may or may not be useful to begin things off, but it is indispensible in the end game (especially when it is coupled with the hidden loner rule of Step 8).

Step 2: Basic "slice and dice"

Let's see how to place a 4 in the bottom-right cell. The blue lines show that it must go into the bottom-most row, because the other two rows already have a 4 in them; these are the slices. Now, one of the three squares in the bottom row of the cell already has a clue in it. The other square is eliminated by dicing: the green line shows that the middle column is ruled out, because it already contains a 4 in another cell. So we have finished the second move in a fiendish puzzle and found out what slicing and dicing is.
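
Slicing and dicing is what solver programs call a hidden single in a box: the digit has exactly one possible square left in that box. A minimal sketch, on the same grid representation:

```python
# "Slice and dice": if a digit has exactly one possible home in a 3x3 box,
# place it there. Rows and columns holding that digit elsewhere do the slicing.
def place_hidden_singles_in_boxes(grid: list[list[int]]) -> bool:
    def allowed(r: int, c: int, d: int) -> bool:
        if d in grid[r]:                               # sliced out by the row
            return False
        if any(grid[i][c] == d for i in range(9)):     # diced out by the column
            return False
        br, bc = 3 * (r // 3), 3 * (c // 3)
        return all(grid[i][j] != d
                   for i in range(br, br + 3) for j in range(bc, bc + 3))

    changed = False
    for br in range(0, 9, 3):
        for bc in range(0, 9, 3):
            box = [(r, c) for r in range(br, br + 3) for c in range(bc, bc + 3)]
            for d in range(1, 10):
                if any(grid[r][c] == d for r, c in box):
                    continue                           # digit already placed
                spots = [(r, c) for r, c in box
                         if grid[r][c] == 0 and allowed(r, c, d)]
                if len(spots) == 1:                    # exactly one home left
                    r, c = spots[0]
                    grid[r][c] = d
                    changed = True
    return changed
```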

Step 3: Applied "slice and dice"

We can place two more 4s, shown in black in the picture on the left; this requires slicing and dicing exactly as before. Another example: we can place a 1 by slicing and dicing, as shown in the picture on the right.

Step 4: Simple "hidden pairs"

Angus Johnson has this to say about hidden pairs: "If two squares in a group contain an identical pair of candidates and no other squares in that group contain those two candidates, then other candidates in those two squares can be excluded safely." In the example on the right, a 2 and a 3 cannot appear in the last column. So, in the middle rightmost cell, these two numbers can only appear in the two positions where they are "pencilled in" in small blue font. Since these two numbers have to be in those two squares, no other numbers can appear there.
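
In code, the rule scans the pencil marks of one group (a row, column, or box). A minimal sketch, with candidates kept as one set per square of the group (my own representation, not Angus Johnson's):

```python
# Hidden pair: if two candidate digits appear only in the same two squares
# of a group, those squares hold exactly those two digits, so any other
# pencil marks in them can be erased.
from itertools import combinations

def apply_hidden_pairs(group: list[set[int]]) -> bool:
    """group: candidate sets for the 9 squares of one row/column/box."""
    changed = False
    for a, b in combinations(range(1, 10), 2):
        spots = [i for i, cands in enumerate(group) if a in cands or b in cands]
        if len(spots) == 2 and all(a in group[i] and b in group[i] for i in spots):
            for i in spots:
                if group[i] != {a, b}:
                    group[i] = {a, b}   # erase every other candidate
                    changed = True
    return changed
```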

Step 5: "Locked candidates"

Angus Johnson again: "Sometimes a candidate within a cell is restricted to one row or column. Since one of these squares must contain that specific candidate, the candidate can safely be excluded from the remaining squares in that row or column outside of the cell." Since the hidden pair 2 and 3 prevents anything else from appearing in the first two columns of the middle rightmost cell, an 8 can only appear in the last column. Now we apply the locked-candidates rule.
We want to place an 8 in the bottom right cell. The last column can be sliced out by the locked-candidates rule. The other slicing and dicing is as usual, leading to the placement of the 8 as shown.
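
A sketch of the box-to-row case of this rule, with pencil marks kept as a dict from (row, column) to candidate sets (again an illustrative representation of my own):

```python
# Locked candidates: if, within a box, all squares that could hold digit d
# lie in a single row, then d can be erased from that row outside the box.
def lock_candidates_in_rows(marks: dict[tuple[int, int], set[int]]) -> bool:
    changed = False
    for br in range(0, 9, 3):
        for bc in range(0, 9, 3):
            for d in range(1, 10):
                rows = {r for r in range(br, br + 3) for c in range(bc, bc + 3)
                        if d in marks[(r, c)]}
                if len(rows) == 1:        # d is locked into one row of the box
                    r = rows.pop()
                    for c in range(9):
                        if not (bc <= c < bc + 3) and d in marks[(r, c)]:
                            marks[(r, c)].discard(d)
                            changed = True
    return changed
```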

 

Step 6: Bootstrap by extending the logic of "locked candidates"

To get to the first step of the bootstrap from the last picture shown above, we need to slice and dice to place an 8 in the center bottom cell. You must be an expert at this method by now, so I leave that in your capable hands.
The first element of the bootstrap is to place 8s in the middle row of cells. The picture on the left shows where the 8s must be placed in the middle left cell. The picture on the right shows the placement in the central cell.
Next we extend the logic of the locked candidates. The 5th and 6th rows must each have an 8: one of them has it in the middle left cell, and the other in the central cell. Therefore the 8 in the middle right cell cannot be in either of these rows. From what we knew before, the 8 must be in the top right corner square of the cell, as shown in the picture on the right. This is almost magical. Putting together imprecise information in three different cells, we have reached precise information in one of the cells.
And now the final step of the bootstrap is shown in the picture on the left. The placement of the 8 dictates that the 6 must be just below it, and therefore the 7 in the remaining square. The diabolical magic is complete: reason enough for this to be classified as a fiendish puzzle. One of Roger Walker's tutorials is a solution of precisely this puzzle, by a different route. But before going there, I invite you to try your hand at completing the solution which we have started upon here.

Step 7: The beginning of the end

The worst is over. We are now truly into the end game. First complete cell C entirely by the "loner" trick, filling 6, 5, 3, and 7 in that order. Next complete cell F. Then finish the 7th column, place the 5 in cell D, and complete that row, to get the picture on the left. We are more than half done. From now on common sense prevails: fill things in one by one. Don't panic, there are no sharks circling the boat. No swordfish either.

Step 8: "Hidden loner": almost not worth naming

The last rule, I promise. And it is hardly a rule, although you could call it the "hidden loner" rule. The only reason to give it a name is that a name fixes this very useful method in one's mind. So here is the example: in the 6th row there is more than one choice in each square. However, there is only one place where the 5 can go (it is excluded from the squares with X's in them). So there is a loner hidden in this row, hence the name. I stop here, but you can go on to solve a fiendish puzzle by the simplest tricks exclusively.

Not so fiendish?

Mike Godfrey wrote to me to point out a much simpler way of solving this particular puzzle. After step 3, as before, one can fill in the 6 shown in blue in the figure here, by noting that all other numbers can be eliminated by requiring that they do not appear in the same row, column or block. After this the remaining puzzle can be solved by spotting singles.
Mike writes that this puzzle "is not too fiendish perhaps". Perhaps. But that opens up the question of how to rate puzzles. I haven't found much discussion of this aspect of the mathematics of Su Doku: partly because commercial Su Doku generators (by that I mean the humans behind the programs) are not exactly forthcoming about their methods, but also because the problem is not terribly well-defined. This is a wide open field of investigation.

From tricks to methods: the roots of mathematics

 

Constraint programming

The minimum Su Doku shown alongside (only 17 clues) requires only two tricks to solve: identifying hidden loners and simple instances of locked candidates. The key is to apply them over and over again: to each cell, row and column. The application of constraints repeatedly in order to reduce the space of possibilities is called constraint programming in computer science. "Pencilling in" all possible values allowed in a square, and then keeping the pencil marks updated is part of constraint programming. This point has been made by many people, and explored systematically by Helmut Simonis.
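
The outer loop of such a constraint-propagation solver is tiny. A sketch, assuming deduction rules like the ones sketched earlier, each returning True when it changed something:

```python
# Constraint propagation: apply every deduction rule until a full sweep
# changes nothing (a fixed point). The rule functions are assumptions,
# e.g. the fill_loners / place_hidden_singles_in_boxes sketches above.
def propagate(grid, rules) -> None:
    while any(rule(grid) for rule in rules):
        pass  # keep sweeping until no rule makes progress
```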

Non-polynomial state space

This is where much of the counting appears. Before clues are entered into an M×M Su Doku puzzle, and the constraints are applied, there are M^(M^2) states of the grid (for M = 9 that is 9^81, roughly 2×10^77). This is larger than any fixed power of M (it is said to grow faster than any polynomial in M). If depth-first enumeration were the only way of counting the number of possible Su Dokus, then this would imply that counting Su Dokus is a hard problem. Applying the constraints without clues is the counting problem of Su Doku. As clues are put in, and the constraints applied, the number of possible states shrinks. The minimum problem is to find the minimum number of clues which reduces the allowed states to one. The maximum problem is analogous.
Many known hard problems are of a type called nondeterministic-polynomial. In this class, called NP, generating a solution of a problem of size M takes longer than any fixed power of M, but given a solution, it takes only time of order some fixed power of M to check it (i.e., a polynomial in M). If enumeration were the only way of counting the number of Su Doku solutions, then counting would be harder than NP. If someone tells me that the number of Su Doku solutions is 6670903752021072936960, I have no way to check this other than by counting, which I know takes time larger than any polynomial in M. At present there is no indication that the counting problem of Su Doku is as easy as NP.

Trial-and-error: is Su Doku an NP complete problem?

The Su Doku problem is to check whether there is a unique solution to a given puzzle: the yes/no answer would usually, but not necessarily, produce the filling of the grid which we call a solution. It would be in NP if the time an algorithm takes to solve the M×M Su Doku problem grows faster than any fixed power of M. It is not known whether the Su Doku problem is in NP.
One sure-fire way of solving any Su Doku puzzle is to forget all these tricks and just blindly do a trial-and-error search, called a depth-first search in computer science. When programmed, even pretty sloppily, this can give a solution in a couple of seconds. If we use this method on M×M Super Doku, then the expected run time of the program on the trickiest puzzles (called the worst case in computer science) would grow faster than any fixed power of M, but (of course) it is guaranteed to solve the puzzle. If trial and error were the only algorithm to solve any Su Doku puzzle whatsoever, and one were able to show that the state space of a puzzle grows faster than a fixed power of M, then this would prove that Su Doku is an NP problem.
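
For completeness, here is a minimal sketch of that blind depth-first search, on the same list-of-lists grid with 0 for empty cells:

```python
# Blind trial-and-error (depth-first search with backtracking):
# try each legal digit in the first empty cell and recurse.
def solve(grid: list[list[int]]) -> bool:
    for r in range(9):
        for c in range(9):
            if grid[r][c] != 0:
                continue
            for d in range(1, 10):
                if legal(grid, r, c, d):
                    grid[r][c] = d
                    if solve(grid):
                        return True
                    grid[r][c] = 0       # undo and try the next digit
            return False                 # no digit fits here: backtrack
    return True                          # no empty cells left: solved

def legal(grid: list[list[int]], r: int, c: int, d: int) -> bool:
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return (d not in grid[r]
            and all(grid[i][c] != d for i in range(9))
            and all(grid[i][j] != d
                    for i in range(br, br + 3) for j in range(bc, bc + 3)))
```
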
Helmut Simonis has results which might indicate that trial and error is never needed, and that a small bag of tricks with hyper-arc consistency always answers the Su Doku question. However, one needs to ask how many times the consistency check has to be applied to solve the worst-case problem, and how fast this grows with M, in order to decide whether constraint programming simplifies the solution.

The controversy over trial-and-error

From this formal point of view, one can see the debate currently raging on Michael Mepham's web site and other Su Doku discussion boards as an argument between the search enthusiasts and the constraint-programming wallahs, with Mepham slowly giving ground in his defence of search. But does the debate just boil down to choosing which algorithm to use? Yes, if the Su Doku problem is easy (i.e., in P) and constraint programming solves it faster. However, if Su Doku is hard, then there is a little more to it.

Backdoors: defining "satisfactory puzzles"

In many instances of NP-complete problems, the average run time of programs can be substantially less than the worst case. Gomes and Selman conjecture that this is due to the existence of "backdoors", i.e., small sets of tricks which solve these average problems. Here human intuition (called heuristics in computer science) can help to identify the backdoors and often crack the nut faster than the sledgehammer of systematic algorithms. Problems with such backdoors I call "satisfactory puzzles". One of the open problems for Su Doku is to define precisely the nature of such backdoors, and the classes of problems which contain them.

Zen and the art of gardening

We have introduced elsewhere a method of counting Su Dokus by a depth-first enumeration of trees (called the garden of forking paths). It is clear that some of the branches of these trees are much longer than the average. As M grows, this imbalance also grows (polynomially, or faster?). This is one way of visualizing the difference between the average case (satisfactory puzzles) and the worst case (diabolical puzzles). My challenge is a gardening problem: how do you make the trees come out balanced and symmetric? It is like a Zen puzzle: if you solve it, you reduce human intuition (heuristics) to an algorithm; even if it is impossible, you gain insight by contemplating the problem.


What Is Hosting? Explanation and Tutorial


What is hosting? Let me explain. Hosting is the place, or the internet service, that allows the web pages you create to go online and be accessed by others.

According to Wikipedia Indonesia, hosting is:
An internet service that provides server resources for rent, allowing organizations or individuals to put information on the Internet in the form of HTTP, FTP, email, or DNS services. A hosting server consists of a single server or a combination of servers connected to a high-speed Internet network.

Types of Existing Hosting


There are several types of hosting services, namely shared hosting, VPS (Virtual Dedicated Server), dedicated servers, and colocation servers.
Shared Hosting means sharing a server with other users: the server is used by more than one domain name. On a single server there are several accounts, distinguished from one another by username and password.
VPS, Virtual Private Server, also known as a Virtual Dedicated Server, is the virtualization of the operating system environment used by the server. Because this environment is virtual, it is possible to install an operating system inside another running operating system.
A Dedicated Server is a server devoted to larger applications that cannot be run on shared hosting or a virtual dedicated server. In this case, the server is provided by the hosting company, which typically works with a vendor.
Server Colocation is the rental of a place to put a server that is used for hosting. The server is supplied by the customer, who typically works with the vendor.

Why Blogger Needs Hosting?


Yes, as a blogger, whether you realize it or not, you need a place to publish to the world of the internet. For example, if you have a blog on WordPress.com, Blogger.com, Multiply.com, DagDigDug.com, or other blog services, you automatically use the hosting they provide. But if you choose to self-host, as Bloggingly does, then you must lease your own hosting.

Choosing a Good Web Hosting Services

When you decide to host your own blog or website, you should be able to pick and choose a good web hosting service. What to look for when choosing hosting for your blog or website:
Your need for space and bandwidth. The more you write, the more space you will need. The more visitors your blog gets, the more bandwidth is required to prevent the server from being overloaded.
Note the services and features of the place where you will host your blog or website, including what software comes with the hosting and what support the hosting service provides.
Your target readers. If your target audience is in your own country, it is better to use a local server, to conserve bandwidth. But if you aim at a global audience, it would not hurt to choose a server abroad, such as in America. This is not an absolute rule, though.
The right price. Consult those who understand hosting better about your needs, so that the service you rent matches the money you will spend.

How to Hire a Web Hosting

To lease a hosting service, you first need to know what hosting is available in your area, or at least in Indonesia. Then click the registration link on the main page of the hosting service.
Just follow the steps as instructed: choose a domain name, check the availability of the domain name you want (as on IDwebhost.com, for example), and complete the payment.
Once your hosting is online, you will usually get an email from the service provider, or from an officer in the chat channel who is online on the homepage.




The History of Microsoft and Bill Gates





The Story of Microsoft
When anyone hears the name Microsoft, they think of one person: Bill Gates, the founder of the company. It is said that Bill Gates is one of the smartest programmers ever. After reading an article on the Altair 8800 in Popular Electronics magazine in 1975, Bill Gates called the creators of the Altair 8800, MITS, offering to demonstrate an implementation of the BASIC programming language for the system. Gates had neither the Altair nor the interpreter. However, in only eight weeks, Bill and Paul Allen had created the interpreter. The interpreter worked without any glitches in the demo, and MITS agreed to distribute it. On that basis, Microsoft was founded.
The name came from combining microcomputer and software into Microsoft. The Microsoft name was registered with the secretary of state of New Mexico on November 26, 1976. Microsoft's first international office was founded in Japan on November 1, 1978; it was named ASCII, and is now known as Microsoft Japan. In January 1979 the company packed up and moved its headquarters to Bellevue, Washington. Steve Ballmer joined Microsoft in June 1980. The company restructured in June 1981 in order to become an incorporated business in its new home state of Washington, changing the name to Microsoft Inc. As part of the restructuring, Bill Gates became President of the company and Chairman of the Board, and Paul Allen became Executive VP.
Microsoft released their first operating system in 1980. It was a variant of Unix, acquired from AT&T through a distribution license, which Microsoft called Xenix. They then hired Santa Cruz Operation to help port/adapt the operating system to several platforms. This variant would become home to the first version of Microsoft's word processor. The company went on to produce several other programs after this one. However, the disk operating system, also known as DOS, was the one that brought them true success. In August 1981, IBM awarded Microsoft a contract to provide a version of the CP/M clone called 86-DOS. This deal went down for less than $50,000. IBM then renamed 86-DOS to PC-DOS, changing the name because of copyright infringement problems. IBM then marketed both CP/M and PC-DOS: CP/M was sold for $240 and PC-DOS for $40. PC-DOS became the standard edition because of its lower price.
In 1983 Microsoft created their very first home computer system, named MSX. It contained its own version of the DOS operating system. This system became very popular in South America, Japan, and Europe. Later on, the market was flooded with IBM PC clones after Columbia Data Products successfully cloned the IBM BIOS. That development left Microsoft in control of its own MS-DOS (descended from QDOS), and, powered by this, Microsoft rose to become one of the major software vendors in the home computer industry. Microsoft released the Microsoft Mouse in May 1983, expanding its product line into other markets. Ever since then, Microsoft has been the biggest player in the industry for creating top-of-the-line software, such as their most famous product, Windows.
In 2001 Microsoft entered the gaming world with their Xbox system, the company's first gaming console to be released onto the market. The Xbox ranked second to Sony's PlayStation 2, selling 24 million units compared to the PlayStation 2's 100 million. The company took a $4 billion loss on the console, which was discontinued in late 2006. In May 2005, Microsoft unveiled the Xbox 360 gaming console. The console had people standing out in the cold for hours waiting to get their hands on one, and as soon as they hit the shelves they sold out completely. As of January 2009, 28,000,000 units had been sold worldwide. Today the Xbox 360 is one of the hottest gaming systems available on the market.


The Story of Bill Gates
Bill Gates was born on October 28, 1955, into a family with a rich background in business, politics, and community service. His great-grandfather was a state legislator and a mayor, his grandfather was vice president of a national bank, and his father was a lawyer.


Bill strongly believes in hard work. He believes that if you are intelligent and know how to apply your intelligence, you can achieve anything. From childhood Bill was ambitious, intelligent, and competitive; these qualities helped him attain the top position in the profession he chose. In school, he had an excellent record in mathematics and science. Still, he was getting very bored in school, and his parents knew it, so they always tried to feed him more information to keep him busy. Recognizing their son's intelligence, Bill's parents decided to enroll him in a private school known for its intense academic environment. It was a very important decision in Bill Gates' life, for it was there that he was first introduced to a computer. Bill Gates and his friends were very interested in computers and formed the "Programmers Group" in late 1968. Being in this group, they found a new way to apply their computer skills at the University of Washington. The next year, they got their first opportunity at Information Sciences Inc., where they were taken on as programmers. ISI (Information Sciences Inc.) agreed to give them royalties whenever it made money from any of the group's programs. As a result of the business deal signed with Information Sciences Inc., the group also became a legal business.


Bill Gates and his close friend Paul Allen then started a new company of their own, Traf-O-Data. They developed a small computer to measure traffic flow, and from this project they earned around $20,000. The era of Traf-O-Data came to an end when Gates left for college. In 1973, he left home for Harvard University. He didn't know what to do, so he enrolled in pre-law. He took the standard freshman courses, with the exception of signing up for one of Harvard's toughest mathematics courses. He did well there, but he couldn't find it interesting either. He spent many long nights in front of the school's computer and the next day asleep in class; he had almost lost himself in the world of computers. Gates and his friend Paul Allen remained in close contact even though they were away from school; they would often discuss ideas for future projects and the possibility of starting a business one fine day. At the end of Bill's first year, Allen moved close by so that they could pursue some of their ideas. That summer they got jobs at Honeywell, and Allen kept pushing Bill to open a new software company.


Within a year, Bill Gates had dropped out of Harvard. Then he formed Microsoft. Microsoft's vision is "A computer on every desk and Microsoft software on every computer." Bill is a visionary person and works very hard to achieve his vision. His belief in high intelligence and hard work has put him where he is today. He does not believe in mere luck or God's grace, but only in hard work and competitiveness. Bill's Microsoft is tough competition for other software companies, and he will continue to stomp out the competition until he dies. He likes to play the game of Risk and the game of world domination. His beliefs are so powerful that they have helped him increase his wealth and his monopoly in the industry.
Bill Gates is not a greedy person. In fact, he is quite a giving person when it comes to computers, the internet, and funding of any kind. Some years back, he visited Chicago's Einstein Elementary School and announced grants benefiting Chicago's schools and museums: he donated a total of $110,000, a bunch of computers, and internet connectivity to a number of schools. He also donated 38 million dollars for the building of a computer institute at Stanford University. Gates plans to give away 95% of all his earnings when he is old and gray.





Journal: Journal of Clinical Virology

Abstract
  • Background: Respiratory infections are the most common infectious diseases in humans worldwide and are a leading cause of death in children less than 5 years of age.
  • Objectives: Identify candidate pathogens in pediatric patients with unexplained respiratory disease.
  • Study design: Forty-four nasopharyngeal washes collected during the 2004–2005 winter season from pediatric patients with respiratory illnesses that tested negative for 7 common respiratory pathogens by culture and direct immunofluorescence assays were analyzed by MassTag-PCR. To distinguish human enteroviruses (HEV) and rhinoviruses (HRV), samples positive for picornaviruses were further characterized by sequence analysis.
  • Results: Candidate pathogens were detected by MassTag-PCR in 27 of the 44 (61%) specimens that previously were rated negative. Sixteen of these 27 specimens (59%) contained picornaviruses; of these, 9 (57%) contained RNA of a recently discovered clade of rhinoviruses. Bocaviruses were detected in three patients by RT-PCR.
  • Conclusions: Our study confirms that multiplex MassTag-PCR enhances the detection of pathogens in clinical specimens, and shows that previously unrecognized rhinoviruses, which potentially form a species HRV-C, may cause a significant amount of pediatric respiratory disease.
Introduction

Respiratory infections are the most common infectious diseases in humans worldwide and a leading cause of death in children less than 5 years of age [1-3]. In the United States of America, acute viral respiratory infections (ARIs) are a significant cause of morbidity and hospitalization in children [4,5]. Identification of previously unknown respiratory pathogens may lead to the development of new therapies and vaccines to treat or prevent ARIs. Recently, a sensitive and highly multiplexed PCR platform, MassTag-PCR, was designed to address the limitations of conventional multiplex PCR [6]. MassTag-PCR simultaneously detects 20–30

Full Link Download

http://www.ziddu.com/download/15072425/jurnalJournalofClinicalVirology.pdf.html

Journal: Medium-resolution transmission measurements of CO2 at high temperature

Abstract
Medium resolution transmissivities of CO2 were measured at temperatures between 300 and 1550 K for the 4.3, 2.7 and 2.0 μm bands. Measurements were made with a new drop tube design, which guarantees a truly isothermal high-temperature gas column. Data were collected with an FTIR spectrometer, allowing for much better spectral resolution than most previous high-temperature measurements. The measured data were compared with two line-by-line and two narrow-band databases. The data show some discrepancies with high-resolution databases at higher temperatures, indicating missing and/or incorrectly extrapolated spectral lines. © 2002 Elsevier Science Ltd. All rights reserved.

Keywords: Radiative properties; Transmissivity; Narrowband; Carbon dioxide; High temperature

1. Introduction
Knowledge of the radiative properties of combustion gases is required to accurately predict radiative fluxes in a number of physical systems like fires and combustion systems. Unfortunately, absorption coefficients of absorbing gases are not known with sufficient accuracy to make reliable heat transfer calculations, especially at high temperatures. Gas spectra broadened by N2, air and other buffer gases have been studied by a number of investigators, some in the atmosphere and others in a laboratory setting. Atmospheric measurements are done using the absorbing gas that is present in the atmosphere. For example, Rinsland et al. [1] describe atmospheric measurements of water vapor properties using an FTIR spectrometer and a telescope. Atmospheric ozone measurements have been made by Bouazza et al. [2] and Flaud et al. [3]. Both these measurements were done with FTIR spectrometers. Farrenq et al. [4] have made atmospheric measurements of solar CO lines, also with an FTIR spectrometer. Atmospheric measurements have the advantage of long optical paths. However,

Link Full Download
http://www.ziddu.com/download/15072424/ediumresolutiontransmissionmeasurementsofCO2athigh.pdf.html

Journal: Hybrid Transceiver Schemes for Spatial

ABSTRACT:

In this article, we present hybrid multiple-input multiple-output (MIMO) transceiver schemes (HMTS) that combine transmit diversity and spatial multiplexing, thus achieving at the same time the two possible spatial gains offered by MIMO systems. For these transceivers, a modification of the interference nulling-and-cancelling algorithm used in traditional MIMO schemes is proposed. We propose a novel MIMO receiver architecture to cope with the hybrid transmission schemes by jointly performing the tasks of interference cancellation and space-time decoding. Both successive and ordered successive detection strategies are considered in the formulation of the receivers. Our simulation results show satisfactory performance of the HMTS when combined with the proposed receivers, outperforming the standard vertical Bell Laboratories layered space-time system in terms of bit/symbol error rate, while providing higher spectral efficiencies than a pure space-time block code system.

Index Terms: MIMO, spatial multiplexing and diversity, space-time coding, hybrid schemes, interference cancellation.

Full Link Download
http://www.ziddu.com/download/1507242/jurnalHybridTransceiverSchemesforSpatial.pdf.html

Paper Summary - Journal: Decision Support System for Workforce Selection Using AHP - Web-Based

 
JOURNAL SUMMARY

TITLE
DYNAMIC DECISION-MAKING SYSTEM FOR SELECTING PROSPECTIVE WORKERS USING A WEB-BASED AHP METHOD
AUTHORS
Ira Prasetyaningrum, Rengga Asmara, and Ahmad Farihin
PUBLISHER
EEPIS Repository, 08 Aug 2011
OBJECTIVE
To ease and assist HR managers in the recruitment process, deciding which applicants will be accepted as company employees, in the search for quality human resources (HR).

BACKGROUND
The existing problem is that worker selection does not yet make full use of written tests; assessment is mostly done through interviews, so the resulting judgments are subjective. In addition, there is no good record-keeping for candidate data, since the data currently held is still in the form of written documents.

METHOD
The Analytic Hierarchy Process (AHP) method was developed by Thomas L. Saaty in the 1970s at the Wharton School. AHP is one of the methods that can be used in a decision-making system while taking into account perception, preference, experience, and intuition. AHP combines judgments and personal values in a logical way.

The method is a framework for making decisions effectively by simplifying and speeding up the decision process: the problem is broken down into its parts, and these parts or variables are arranged in a hierarchical structure. The method also combines the strength of feeling and logic on various problems, then synthesizes the diverse considerations into a result that fits.

APPLICATION - EXPERIMENT
  1. Define the problem and determine the desired solution.
  2. Build a hierarchy, starting from the overall goal, followed by the criteria and the alternative choices to be ranked.
  3. Form pairwise comparison matrices that describe the relative contribution or influence of each element on each goal or criterion one level above it. Comparisons are made based on the choices or judgments of the decision maker, by rating the importance of one element relative to another.
  4. Normalize the data by dividing the value of each element in the pairwise matrix by the total value of its column.
  5. Compute the eigenvector and test its consistency; if it is not consistent, the data (preferences) must be collected again. The eigenvector in question is the maximum eigenvector, which can be obtained with Matlab or by hand.
  6. Repeat steps 3, 4, and 5 for every level of the hierarchy.
  7. Compute the eigenvector of every pairwise comparison matrix; the eigenvector values are the weights of the elements. This step synthesizes the choices in prioritizing elements from the lowest level of the hierarchy up to the achievement of the goal.
  8. Test the consistency of the hierarchy. If it does not satisfy CR < 0.100, the assessment must be repeated (see the sketch after this list).
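
To make steps 3-5 and 8 concrete, here is a minimal, illustrative Python sketch (the 3×3 comparison matrix and the criteria are invented; the row-mean approximation of the eigenvector and Saaty's random-index table are standard AHP practice):

```python
# AHP in miniature: normalize a pairwise comparison matrix, take the
# row means as the priority (approximate eigen-) vector, then compute
# the consistency ratio CR. CR < 0.100 means the judgments are usable.
import numpy as np

RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's RI table

def ahp_weights(pairwise: np.ndarray) -> tuple[np.ndarray, float]:
    n = pairwise.shape[0]
    normalized = pairwise / pairwise.sum(axis=0)   # step 4: column-normalize
    weights = normalized.mean(axis=1)              # step 5: approx. eigenvector
    lambda_max = (pairwise @ weights / weights).mean()
    ci = (lambda_max - n) / (n - 1)                # consistency index
    cr = ci / RANDOM_INDEX[n]                      # step 8: consistency ratio
    return weights, cr

# Hypothetical judgments: experience vs. test score vs. interview.
matrix = np.array([[1.0, 3.0, 5.0],
                   [1/3, 1.0, 3.0],
                   [1/5, 1/3, 1.0]])
weights, cr = ahp_weights(matrix)
print("criterion weights:", weights.round(3))
print("consistency ratio:", round(cr, 3))   # should be < 0.100
```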
 
Application Program Stages

1. The system applies AHP to the problem of selecting prospective workers at the institution concerned.

2. The system can present the results of candidate selection to company management in a visual form that is easy to understand.

 
The candidate data menu is used to manage the data of the candidates to be evaluated. The admin can edit, add, or delete data.



 
CONCLUSIONS
  • Using the AHP method, the HR department is assisted in determining which candidates can be accepted by the company.
  • The AHP calculations in this system match a manual AHP process.
  • The criteria are dynamic, so they can be adjusted to the problem at hand.

EVALUATION - ANALYSIS
This journal uses the AHP method in a decision-support process for employee selection, measured on subjective data, and it gives a final evaluation of the selection process. A shortcoming is that in an interview we can get to know more of a candidate's character, which cannot be judged from a test alone.


REFERENCES
  • Subakti, Irfan. Sistem Pendukung Keputusan.
  • Kusrini. 2007. Konsep dan Aplikasi Sistem Pendukung Keputusan. ANDI, Yogyakarta.
  • Saaty, T. L. Decision Making with the Analytic Hierarchy Process. Int. J. Services Sciences, Vol. 1, No. 1, 2008.
  • Bambang Eka Putra. Sistem Pendukung Penilaian Karyawan. Skripsi, Universitas Islam Indonesia, Yogyakarta.

