Location: Hyderabad, India


File Allocation Table

FAT, the MS-DOS file system, is supported by most of today's operating systems. FAT comes in three flavours: FAT12, FAT16 and FAT32. The names refer to the number of bits used by the entries in the table that gave the file system its name. The "File Allocation Table" itself is just one of the on-disk structures inside the FAT file system; its purpose is to keep track of which areas of the disk are available and which are in use. Another important part of FAT is the "Long File Name" extension, sometimes referred to as VFAT. The terms LFN and VFAT are closely related, but VFAT really just means Virtual FAT.

FAT Overview

It's time to get slightly technical. I'll first mention the structures roughly in the order in which they usually appear inside the partition. When talking about the order of things, I'm referring to the order as seen through the Logical Block Address of a particular structure.

Cluster

This term is fundamental to FAT. A cluster is a group of sectors on the FAT media. Only the part of the partition called the "data area" is divided into clusters; the rest of the partition is simply sectors. Files and directories store their data in these clusters. The size of one cluster is specified in a structure called the Boot Record and can range from 1 to 128 sectors.

Boot Record

All three flavours of FAT have a Boot Record, which is located within an area of reserved sectors. The DOS format program reserves 1 sector for FAT12 and FAT16 and usually 32 sectors for FAT32.

File Allocation Table

The actual "File Allocation Table" structure is relatively simple, as are all of the FAT structures. The FAT is a simple array of 12-bit, 16-bit or 32-bit data elements. Usually there are two identical copies of the FAT; a field in the Boot Record specifies the number of FAT copies.
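The positions of these structures follow directly from the Boot Record fields. Below is a minimal sketch in Python of how the region offsets of a FAT16 volume can be computed; the parameter names and the sample values are illustrative assumptions, not fields read from a real disk.

```python
# Sketch: locating the main FAT16 on-disk regions from Boot Record fields.
# Field names and the sample values are illustrative, not read from a real volume.

def fat16_layout(bytes_per_sector, reserved_sectors,
                 num_fats, root_entries, sectors_per_fat):
    """Return the starting sector (relative to the volume) of each region."""
    fat_start = reserved_sectors                      # FAT copies follow the reserved area
    root_start = fat_start + num_fats * sectors_per_fat
    # Each directory entry is 32 bytes; the root directory is a fixed area on FAT12/16.
    root_sectors = (root_entries * 32 + bytes_per_sector - 1) // bytes_per_sector
    data_start = root_start + root_sectors            # first sector of the cluster area
    return fat_start, root_start, data_start

# Typical FAT16 values: 512-byte sectors, 1 reserved sector, 2 FAT copies,
# 512 root directory entries, 250 sectors per FAT copy.
print(fat16_layout(512, 1, 2, 512, 250))   # (1, 501, 533)
```

Note how the root directory start depends on the number of FAT copies, which is why the Boot Record must record that count.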
With FAT12 and FAT16, MS-DOS uses only the first copy, but the other copies are kept in sync. FAT32 adds a 4-bit value in a "Flags" field that specifies which FAT copy is the active one. It is quite common to think of the FAT as a singly linked list: each chain in the FAT specifies which parts of the disk belong to a given file or directory.

Root Directory

The Root Directory is formatted like any other directory, except that it does not contain the "dot" and "dot-dot" entries. See the details section for more information. On FAT12 and FAT16 volumes, the root directory is always found immediately following the file allocation table(s).

Data Area

The user data area (or just data area) is where the contents of files and directories are stored; see the formulas above for how to calculate its size. The data area is divided into sector groups called clusters, and all the clusters in a single FAT volume have the same size. The term slack space refers to any unused space at the end of a cluster; it cannot be used by any other file or directory. Directories do not suffer from slack space problems, simply because the exact size in bytes of a directory is not recorded as it is for files, and generally no one seems to care anyway. The data area will not be explained in further detail; the closest we get to data area details is the information on how to access files and directories.

Wasted Sectors

If the number of data sectors is not evenly divisible by the cluster size, you end up with a few wasted data sectors. Likewise, if the partition as declared in the partition table is larger than what is claimed in the Boot Record, the volume can be said to have wasted sectors.
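The linked-list view of the FAT can be sketched in a few lines. The table below is a hand-made FAT16 example, assuming the common convention that 16-bit entries of 0xFFF8 and above mark the end of a chain; it is not data read from a real disk.

```python
# Sketch: walking a file's cluster chain through a FAT16 table.
# The table is a hand-made example, not read from a real disk.

END_OF_CHAIN = 0xFFF8  # FAT16 entries >= 0xFFF8 mark the last cluster of a chain

def cluster_chain(fat, first_cluster):
    """Return every cluster number belonging to one file, in order."""
    chain = []
    cluster = first_cluster
    while cluster < END_OF_CHAIN:
        chain.append(cluster)
        cluster = fat[cluster]   # each entry points to the next cluster in the chain
    return chain

# Entries 0 and 1 are reserved; a file starting at cluster 2 occupies
# clusters 2 -> 3 -> 5, while cluster 4 is free (0x0000).
fat = [0xFFF8, 0xFFFF, 3, 5, 0x0000, 0xFFFF]
print(cluster_chain(fat, 2))   # [2, 3, 5]
```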
If you are not familiar with the term partition table, I suggest that you go to Hale Landis’ web site and look for the How It Works series of documents at -


Interesting Stuff

Google searches for science

On 17 November, Google Inc. announced the addition of a new search engine, called Google Scholar, that points to Web pages containing documents such as peer-reviewed papers, books, and technical reports. The new search service will make it easier for scientists and other researchers to find articles and papers related to their fields. Though most scholarly papers are indexed on the Web, their contents are often not publicly available.

The new service also addresses the longstanding need of students and researchers in developing nations for access to up-to-date materials unavailable in conventional libraries. Anurag Acharya, an engineer at Google who led the project, told the New York Times that global access to research may spur innovation. “We don’t know where the next breakthrough will come from. We want everyone to be able to stand on the shoulders of giants.”

The project was made possible through the cooperation of scientific and technical publishers, including the IEEE, the Association for Computing Machinery, Nature, and the Online Computer Library Center.

The proof is in the printer

Computer researchers have made it easier for sleuths to catch criminals who use laser printers to carry out their illicit deeds. A team from Purdue University in West Lafayette, Ind., proved that every printer has a unique signature based on the way it lays ink down on a page, and has developed techniques for matching a document with a specific printer. Edward Delp, a Purdue professor who led the team, told BBC News that the team takes "mathematical features, or measurements, from printed letters, then [uses] image analysis and pattern recognition techniques to identify the printer." In 11 out of 12 tests of this method, it successfully pointed to the printer used to create a document.

Explaining why apparently identical printers actually differ, Professor Jan Allebach, a member of the team, said, "For a company to make printers all behave exactly the same way would require tightening the manufacturing tolerances to the point where each printer would be too expensive for consumers."
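The matching step the team describes amounts to comparing a document's feature vector against known printer profiles. The sketch below is only an illustration of that idea with a made-up two-number feature vector; the Purdue team's actual features and classifier are not described here.

```python
# Sketch: matching a document to a printer by comparing feature vectors.
# The two-number "features" are invented; the real work extracts measurements
# from printed letters, but the nearest-profile matching idea is the same.
import math

def closest_printer(profiles, document_features):
    """Return the printer whose known feature profile is nearest the document's."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(profiles, key=lambda name: distance(profiles[name], document_features))

profiles = {
    "printer_A": (0.80, 0.10),   # hypothetical banding / toner-density measurements
    "printer_B": (0.30, 0.70),
}
print(closest_printer(profiles, (0.78, 0.14)))   # printer_A
```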


Energy-Saving Screens

Processors and memory chips keep growing in capacity, but batteries don’t improve fast enough to keep up. So the only way to increase the battery life of mobile devices such as PDAs and smart phones is to reduce the amount of power they consume. Working with a new generation of displays based on organic light-emitting diodes (OLEDs), researchers at Hewlett-Packard have found a way to do that: dimming the parts of the screen that aren’t in use. “We have energy-aware central processors; why don’t we have energy-aware interfaces?” asks Parthasarathy Ranganathan, a senior research scientist at HP Labs. The prevailing approach to energy-saving displays—leaving the entire screen illuminated while a device is active but turning it off after a minute or two of inactivity—is less than ideal, since it uses a lot of energy when the screen’s on and, when it’s off, forces the user to push a button to return to his or her task. Instead, Ranganathan’s team developed special software that monitors a PDA’s screen when it’s in use and automatically dims the unimportant pixels—for example, everything in the background behind an active pop-up menu or dialogue box. The method is not effective with most of today’s standard liquid-crystal displays, which are illuminated by fluorescent bulbs that remain on even if a particular group of pixels is dark. But in OLED screens, each pixel emits its own light, so “if you turn off a pixel, you don’t have to spend power on it,” explains Ranganathan. Since phones and PDAs with OLED screens are expected to become commonplace within two years, the new software could soon be a standard feature of the operating systems of mobile devices.
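Because each OLED pixel draws power only when lit, the saving from dimming the background can be estimated with a simple per-pixel power model. The linear model and the numbers below are illustrative assumptions, not HP's measurements.

```python
# Sketch: why dimming background pixels saves power on an OLED, where each
# pixel draws power roughly in proportion to its own brightness.
# The linear power model and all the numbers are illustrative assumptions.

def screen_power(brightness):
    """Total power of a display whose per-pixel draw scales with brightness (0..1)."""
    return sum(brightness)

full = [1.0] * 100                    # every pixel fully lit
dimmed = [1.0] * 20 + [0.2] * 80      # active dialogue bright, background dimmed

saving = 1 - screen_power(dimmed) / screen_power(full)
print(f"{saving:.0%}")   # 64%
```

On a backlit LCD the same trick gains nothing, since the fluorescent lamp stays on regardless of which pixels are dark.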


What are cookies?

You may have noticed that Web sites are getting smarter by the day. They seem to "know" more and more about you each time you visit. For instance, you may bookmark a popular site such as Amazon or CDnow, and find that the computer on the other end knows not only that you've been there before, but exactly when you last visited and what you were looking at the last time you clicked by. Spooky, you say? Exciting? Perhaps a little of both? Most Web sites accomplish this stunning feat with HTTP cookies.

A cookie is a small piece of information that's sent to your browser - along with an HTML page - when you access a particular site. When a cookie arrives, your browser generally saves this information to your hard drive; when you return to that site, some of the stored information is sent back to the Web server along with your new request.

Sites with "shopping carts" are a good example of cookies in action: you browse a series of Web pages for items to buy, and when you find something you want, you "add it" to your shopping cart by clicking a button on the page. Later, you can view these items all together. The funny thing is, even though you're communicating through an "anonymous" connection, the site always knows exactly what's in your personal shopping cart. It doesn't seem to matter whether you've clicked away to somewhere else and come back, or even if you've completely shut down your computer and returned days later. The site still knows who you are and what you were shopping for. But how?

Cookies work their magic by expanding the abilities of HTTP, so it's hard to talk about one without first explaining the other. HTTP (Hypertext Transfer Protocol) is the group of standards that covers the way Web pages, graphics, and other data are transferred across the Net. In other words, it's the rules of the road. Every server and browser on the Web uses this standard to communicate.
A small HTTP header is sent with each transaction, telling the receiving end exactly what it's getting. These headers communicate requests from browsers, as well as server responses. A normal HTTP response header looks something like this:

HTTP/1.0 200 Found
Date: Wed, 30 Oct 1996 23:48:22 GMT
Server: Apache/1.1.1 HotWired/1.0
Location:
Content: text/webmonkey/html

This header (or something like it) is sent with every single file that comes to you through the Web. So why haven't you noticed? Well, the information contained inside a cookie isn't displayed. In fact, a cookie is designed to be invisible to the user: your browser is smart enough to strip off the header information and just give you the page you're looking for.

One of the limitations of HTTP is that it's a "stateless" connection. It works more or less like a vending machine: you push a button, and if everything checks out (i.e., you have correct change), it gives you what you want. The vending machine doesn't know anything about you, except that you ordered a root beer and it served you one. There's not a lot of information going back and forth.

The HTTP cookie is an attempt to make regular HTTP a little smarter by including more information inside the HTTP header. By adding a "Set-Cookie: ..." line to the header, the server can deliver cookie information to your browser. Your Web browser then saves this information and sends it back to the server the next time you visit the same site. Through this system, a kind of "persistent state" can be maintained, even though there's no ongoing communication between your browser and the cookie-setting server.

This opens up possibilities that Webmasters can make use of. For instance, if a site wants to track the number of unique visitors over a period of time, the Webmaster can write a script that plants a cookie during the first visit. On subsequent visits, the script will see that the cookie is already there and do nothing.
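The round trip between the "Set-Cookie:" and "Cookie:" headers can be demonstrated with Python's standard http.cookies module; the cookie name and value below are invented for the example.

```python
# Sketch of the cookie round trip using Python's standard http.cookies module.
# The cookie name "visitor_id" and its value are invented for the example.
from http.cookies import SimpleCookie

# Server side: produce a "Set-Cookie:" line for the HTTP response header.
response = SimpleCookie()
response["visitor_id"] = "abc123"
print(response.output())   # Set-Cookie: visitor_id=abc123

# Browser side: on the next request to the same site, the stored value goes
# back to the server in a "Cookie:" header, which the server parses like this.
request = SimpleCookie("visitor_id=abc123")
print(request["visitor_id"].value)   # abc123
```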
This is a very simple example. A clever coder can use cookies to track user behavior over a period of time or to maintain a shopping cart.
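The unique-visitor counter described above fits in a few lines. The handler below uses plain dictionaries as simplified stand-ins for a real web server's request and response headers.

```python
# Sketch of the unique-visitor counter: plant a cookie on the first visit,
# do nothing when it is already there. The dicts are simplified stand-ins
# for a real server's request and response headers.

visitor_count = 0

def handle_request(request_headers, response_headers):
    """Count a visitor only when no tracking cookie arrives with the request."""
    global visitor_count
    if "Cookie" not in request_headers:
        visitor_count += 1
        response_headers["Set-Cookie"] = "seen=1"   # browser stores and resends this

first_visit = {}
handle_request({}, first_visit)              # no cookie: counted, cookie planted
handle_request({"Cookie": "seen=1"}, {})     # cookie present: not counted again
print(visitor_count)   # 1
```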


Auto Animator

Animating a person’s movements for a movie or video game can be costly and time consuming, requiring that actors be filmed with special cameras for every step and shrug. A new tool created by Zoran Popovic at the University of Washington and Aaron Hertzmann at the University of Toronto, however, can extrapolate a person’s movements from a single sequence of motions. First, the sequence is used to train the system. Then the animator picks a new movement for the digital character by, say, changing the position of its hands and feet. The system then calculates the most probable corresponding positions of the rest of the body. Popovic says that a clip of only 20 or 30 frames is enough information to give the system a good sense of how a person tends to move. Popovic imagines that the technology would be particularly useful for animators who make sports video games based on actual players. In fact, the technology is currently licensed to Redwood City, CA-based Electronic Arts, a maker of video games.
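The published system learns a statistical model of motion, but the underlying idea of completing a full pose from a few pinned joints can be illustrated with a much cruder nearest-neighbour lookup; everything below, including the joint names and poses, is invented for the sketch.

```python
# Sketch of the idea only: given known positions for a few joints, return the
# training pose whose matching joints are closest to those constraints.
# The real system fits a statistical model; this is a crude stand-in.

def complete_pose(training_frames, constrained):
    """Pick the training frame that best matches the constrained joints."""
    def mismatch(frame):
        return sum((frame[j] - v) ** 2 for j, v in constrained.items())
    return min(training_frames, key=mismatch)

# Tiny "clip": each frame maps joint name -> 1-D position.
clip = [
    {"hand": 0.0, "foot": 0.0, "head": 0.0},
    {"hand": 1.0, "foot": 0.5, "head": 0.2},
]
# The animator pins the hand near 0.9; the system supplies the rest of the body.
print(complete_pose(clip, {"hand": 0.9}))   # {'hand': 1.0, 'foot': 0.5, 'head': 0.2}
```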