Thursday, February 26, 2009

When Energy Rhymes With Nanotechnology

Let's face it: our computing devices get faster year after year, but our laptop batteries don't show the same improvement. They still last only a few hours, barely longer than they did ten years ago. Several companies want to change this, according to this UPI report, "Nanotechnology improving energy options." For example, mPhase Technologies plans to introduce smart batteries based on millions of silicon nanotube electrodes. These nanobatteries, to be introduced before the end of 2005, will last longer than traditional ones and will be more environmentally friendly. Meanwhile, Konarka Technologies wants to reduce the weight of batteries with its flexible solar-powered nanobatteries.

Let's start with the new battery nanotechnology from mPhase Technologies.
The company is seeking to develop a battery containing millions of silicon nanotube electrodes, standing upright like a bed of nails. Atop each nanotube sits a droplet of electrolyte, resting there with little or no interaction, much as an Indian fakir can rest atop a nail bed. But when a voltage change pushes the droplets down into the spaces between the tubes, they encounter a greatly increased surface area and react with the tubes, causing current to flow (Credit: mPhase Technologies).


"This can give them a very long storage life of years and years, by only activating when in use," explained Steve Simon, mPhase executive vice president for R&D. The silicon-based devices are compatible with semiconductor processes, are easy to miniaturize, have a quick ramp up to full power, are inexpensive to mass produce and have high power and energy density.
The nanobatteries also can contain droplets that can neutralize the often-toxic electrolytes when it comes time to dispose of them. "This green effect means when thrown away, it does not pollute the environment," Simon said.

Improving battery performance is a good thing; reducing battery weight is another. Did you know that special operations soldiers on the battlefield can carry up to 70 pounds of batteries, or half the weight of the equipment they have to bear? Konarka Technologies wants to reduce this.
Konarka Technologies of Lowell, Mass., makes plastic devices that absorb sunlight and indoor light and convert them into electrical energy.
The devices resemble gift-wrapping paper in their thinness and flexibility, and can be integrated into fabrics and roofs. They are made using nanoscale titanium dioxide particles coated in photovoltaic dyes. When light hits the dyes, the particles generate electricity.

As you can see below, these photovoltaic materials can be incorporated into a wide range of products (Credit: Konarka Technologies).


"They're lightweight and flexible, more versatile than previous generations of solar cells," said Daniel McGahn, Konarka's executive vice president and chief marketing officer.

According to UPI, this company has serious backers, such as Electricité de France and ChevronTexaco.

For more information, the UPI report mentions other companies involved in batteries using nanotechnologies.

This New Radar Can See Through Walls

According to Haaretz, an Israeli start-up has developed a new radar technology to see through walls. This radar system, based on UWB (ultra wideband) technology, can produce 3D images of what lies behind walls. The real breakthrough is that this system can be used from a distance of up to 20 meters, which will benefit rescuers as well as military personnel by providing useful information about the number of people inside a room, their locations and even their weapons. The newspaper adds that the images are of good quality, allowing the users of the system to follow what is happening behind the wall in real time. However, don't expect to get one today. The first devices are expected to be available within 18 months.

Here are some excerpts.
a small, Herzliya-based company called Camero is offering a solution: a radar system, based on UWB (ultra wideband) technology, that can produce three-dimensional pictures of what lies behind a wall, from a distance of up to 20 meters. The pictures, which resemble those produced by ultrasound, are relatively high-resolution. Although the figures are somewhat blurred, the system enables the user to follow what is happening behind the wall in real time.
"The company was born of urgent operational needs," said CEO Aharon Aharon -- and not only those of the military. "When disaster victims must be rescued from a collapsed building or a fire, time is of the essence," he explained. "Rescue forces often invest enormous resources and precious time in combing the rubble, or endanger their lives by entering the flames, even if it is not clear that there are any survivors behind the walls."

And here are some details about the technology, which also uses special software.
Camero was born at the Jerusalem Global venture capital fund (JVG), when Amir Be'eri, a former defense establishment employee associated with the fund (his most recent position was CEO of Infineon), developed a way to emit UWB radio waves. UWB was a new technology at the time, and it was necessary because ordinary radio waves do not provide high enough resolution to be useful. Yet radio waves are necessary because other types of waves do not pass through walls.
Another problem with radio waves is that they do not function well around metal. However, Camero has developed sophisticated software that enables its technology to work even on steel-reinforced concrete walls.

Apparently, the first devices will be ready within 18 months, a period during which Camero's competitor, Time Domain, will be able to sell its own technology.
Time Domain, which also uses UWB technology to see through walls, has been active for six months and is already selling millions of dollars worth of devices a year. But Camero's technology is superior in several important respects. First, it can be used from a distance of 20 meters, whereas Time Domain's product must be right next to the wall in question. Second, it gives a detailed picture of everything in the room, whereas Time Domain's product locates objects, but gives no information about their shape or size.

In "Israeli invention sees through walls," WorldNetDaily gives some details about other defense technologies developed in Israel.
Israeli firms are well known for developing revolutionary technology, particularly in the defense fields. El Al Airlines recently implemented a high-tech antimissile system developed by an Israeli firm, and Israel announced it developed a Star Wars-like remote control border with Gaza that uses unmanned sensor patrol cars and computerized observation posts to automatically spot and, upon human authorization, kill terrorists, even recommending the most appropriate weapon for the system to fire against a specified target.
In addition, an Israeli security source told WND that Israel recently developed proprietary technology that can discreetly put an electronic field around a building or area that gives users the ability to monitor and control every electronic emission within that field, from electronic can openers to fax machines, computers and cell phones.

'Space-cooled' jackets down to Earth

Several technologies used to design the space suits protecting astronauts are now being adapted to protect workers facing extremely hot and dangerous conditions. According to the European Space Agency (ESA), these 'space-cooled' jackets use three different technologies: a special 3D-textile structure, a cooling apparatus derived from astronauts' suits, and a special water-binding polymer acting as a coating. Although this protective clothing is primarily intended for firefighters and steel workers, several other applications are possible, such as in sportswear or in cars as part of air-conditioning systems.

Here are the goals of this project, explained by Stefano Carosio from the Italian company D'Appolonia.
"Through this project, named Safe&Cool, we are developing a special protective material with a built-in cooling system based on the technology developed for the space suits used by astronauts on the International Space Station to prevent them from overheating when exposed to direct sunlight during space walks."

And for this project, the members of the consortium used several technologies.
Firstly a special 3D-textile structure is used in the thermal and moisture management layer to replace the interliner and moisture barrier of classical three-layered protective clothing.
The second technology is the cooling apparatus derived from astronauts' suits. This enables liquid to be circulated through tubing inserted in cavities in the 3D-textile structure, creating 'blood vessels' for heat removal. A water-binding polymer is the third technology and this will be added either as a coating or in the form of a powder dispersed inside the fabrics.
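The heat-removal capacity of such a liquid-cooling loop can be estimated with the standard heat-transfer relation Q = ṁ·c·ΔT. Here is a minimal sketch; the flow rate and temperature rise below are illustrative assumptions, not Safe&Cool project figures.

```python
# Estimate heat removed by liquid circulating through the tubing,
# using Q = mass_flow * specific_heat * delta_T.
# All numbers below are illustrative assumptions, not Safe&Cool figures.

def cooling_power_watts(flow_lpm: float, delta_t_c: float) -> float:
    """Heat removed (W) by water at a given flow rate (L/min)
    warming by delta_t_c degrees Celsius as it passes through the suit."""
    density = 1.0           # kg/L for water
    specific_heat = 4186.0  # J/(kg*K) for water
    mass_flow = flow_lpm * density / 60.0  # kg/s
    return mass_flow * specific_heat * delta_t_c

# e.g. 1 L/min of water warming by 5 degrees C removes roughly 350 W,
# on the order of the metabolic heat of a person doing hard work.
print(round(cooling_power_watts(1.0, 5.0)))
```

This back-of-the-envelope figure suggests why a modest water loop can keep up with the heat load of a firefighter or steel worker.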

Below is a picture showing the three technologies described above (Credit: Safe&Cool Project Consortium).



And the picture below shows a detail of the cooling apparatus derived from astronauts' suits: the cooling tubes are woven into the improved textiles developed by the Safe&Cool project (Credit: Safe&Cool Project Consortium).



But what will such suits be used for?
Although the immediate application for the Safe&Cool innovative thermal management system is to create clothing to protect those working in harsh environments, such as firefighters and steel workers, several other promising applications have been identified by the consortium, including use in sportswear and transportation. The Polish company TAPS, which is part of the consortium, is already testing the industrial viability of inserting the system as heating or conditioning elements inside passenger seats in cars and public transport.

For other illustrations of this technology, you might want to check this page on the ESA Portal, from which the two images above have been taken. You can also look at a previous story featuring a cooling jacket for astronauts using the technology described above.

Ten Emerging Technologies That Will Affect Our Lives

This is the time of the year when Technology Review publishes its forecasts about ten emerging technologies which will change our world some day. This year's batch includes Bayesian machine learning, RNA interference and microfluidic optical fibers. But last year's list included injectable tissue engineering and nanoimprint lithography, which didn't really change the world in 2003. So take this list with a grain of salt.

Let's start with the introduction.
With new technologies constantly being invented in universities and companies across the globe, guessing which ones will transform computing, medicine, communication, and our energy infrastructure is always a challenge. Nonetheless, Technology Review’s editors are willing to bet that the 10 emerging technologies highlighted in this special package will affect our lives and work in revolutionary ways -- whether next year or next decade. For each, we’ve identified a researcher whose ideas and efforts both epitomize and reinvent his or her field.

Here is the full list.
Universal Translation, with Yuqing Gao, from IBM
Synthetic Biology, with Ron Weiss, from Princeton University
Nanowires, with Peidong Yang of the University of California, Berkeley
Bayesian Machine Learning, with Daphne Koller, from Stanford University
T-Rays, with Don Arnone, from Toshiba’s research labs in Cambridge, England
Distributed Storage, with Hari Balakrishnan, from MIT
RNA Interference, with Thomas Tuschl, formerly from the Max Planck Institute for Biophysical Chemistry in Germany, and now at Rockefeller University in New York City
Power Grid Control, with Christian Rehtanz, from Switzerland-based engineering giant ABB
Microfluidic Optical Fibers, with John Rogers, from the University of Illinois
Personal Genomics, with David Cox, chief scientific officer of Perlegen Sciences in Mountain View, CA

Here is the last paragraph of the article about nanowires.
Difficult tasks remain, such as making electrical connections between the minuscule wires and the other components of any system. Still, Peidong Yang of the University of California, Berkeley, estimates there are now at least 100 research groups worldwide devoting significant time to overcoming such obstacles, and commercial development efforts have already begun. Last year, Intel, which is working with Lieber, revealed that nanowires are part of its long-term chip planning. Smaller firms such as Nanosys and QuMat Technologies, a startup now renting space at Lund University in Sweden, are betting that nanowires will be essential components of the products they hope to sell one day, from sensors for drug discovery and medical diagnosis to flat-panel displays and superefficient lighting.

And here is a short excerpt about Bayesian statistics.
Programs that employ Bayesian techniques are already hitting the market: Microsoft Outlook 2003, for instance, includes Bayesian office assistants. English firm Agena has created Bayesian software that recommends TV shows to satellite and cable subscribers based on their viewing habits; Agena hopes to deploy the technology internationally. "These things sound far out," says Microsoft researcher Eric Horvitz, who is a leading proponent of probabilistic methods. "But we are creating usable tools now that you’ll see in the next wave of software."
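To give a flavor of the Bayesian approach behind such recommenders, here is a toy sketch that estimates how likely a viewer is to enjoy a genre from their viewing history, using Laplace-smoothed counts. The viewing data is invented for illustration and has nothing to do with Agena's actual system.

```python
# Toy Bayesian recommender: estimate P(like | genre) from a viewer's
# history using Laplace smoothing, then compare candidate genres.
# The viewing history below is invented for illustration.

history = [("drama", True), ("drama", True), ("sports", False),
           ("drama", False), ("sports", False), ("comedy", True)]

def p_like_given_genre(genre: str, alpha: float = 1.0) -> float:
    """Laplace-smoothed estimate of P(like | genre)."""
    likes = sum(1 for g, liked in history if g == genre and liked)
    total = sum(1 for g, _ in history if g == genre)
    return (likes + alpha) / (total + 2 * alpha)

for genre in ("drama", "sports", "comedy"):
    print(genre, round(p_like_given_genre(genre), 2))
```

The smoothing term keeps estimates sensible for genres with few observations, which is exactly why probabilistic methods cope well with sparse user data.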

A robotic Cyberknife to fight cancer

The Cyberknife is not a real knife. It is a robotic radiotherapy machine that works with great accuracy during treatment, thanks to a robotic arm which moves around patients as they breathe. According to BBC News, the first Cyberknife will be operational in February 2009 in London, UK. But other machines have been installed in more than 15 countries, and had been used to treat 50,000 patients as of the first half of 2008. And the Cyberknife is more efficient than conventional radiotherapy devices: current systems require twenty or more short sessions with low-dose radiation, whereas, because it is extremely precise, a Cyberknife can deliver powerful radiation in just three sessions. ...



As you can see above, "the CyberKnife System uses image guidance software to track and continually adjust treatment for any patient or tumor movement. This sets it far ahead of other similar treatments. It allows patients to breathe normally and relax comfortably during treatment." And it "uses pencil beams of radiation which can be directed at any part of the body from any direction via a robotic arm" (Credit: various Accuray pages). Here is a link to a larger version of the picture above.

The Cyberknife is a product of Accuray, which has deployed a corporate CyberKnife website and many other local sites in various locations. Speaking of locations, here is a link to a page where you can check whether you live near a medical institution using such a system.

Here is an excerpt from the BBC News article. "At first sight the Cyberknife looks like one of those robots used in the TV car commercials. It is a compact linear accelerator mounted on a robot arm. The cyberknife works by delivering multiple beams of high dose radiation from a wide variety of angles using a robotic arm. X-ray cameras monitor the patient's breathing and re-position the radiotherapy beam in order to minimise damage to healthy tissue. This accuracy enables tumours to be treated that are in difficult or dangerous to treat positions, such as near the spinal cord."

Of course, such a treatment is expensive. "Treatment will cost between £20,000 and £25,000."

Now, let's look at the new CyberKnife Centre in London to discover what the CyberKnife is and how it works. "The vast array of different angles/trajectories from which pencil beams of radiation converge upon the tumour lead to an extremely high cumulative dose of radiation therapy at the convergence point (the target/tumour) and yet a very fast 'fall-off' of dose at the periphery of the carefully mapped target. The surrounding normal tissues/organs only receive a small fraction of the high central dose of therapy."
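The convergence principle is easy to see with a little arithmetic: every beam passes through the target, so doses add up there, while any given patch of healthy tissue lies on only a few beam paths. The beam counts and doses below are illustrative assumptions, not clinical figures.

```python
# Sketch of why many converging pencil beams give a high dose at the
# target but only a small dose elsewhere. Numbers are illustrative only.

n_beams = 150          # beams fired from different angles
dose_per_beam = 0.5    # gray (Gy) delivered along each beam path

# Every beam passes through the target, so doses accumulate there:
target_dose = n_beams * dose_per_beam

# A given patch of healthy tissue lies on the path of only a few beams:
beams_through_healthy_patch = 2
healthy_dose = beams_through_healthy_patch * dose_per_beam

print(target_dose, healthy_dose)
```

With these assumed numbers the tumour receives 75 Gy while a healthy patch gets only 1 Gy, which is the sharp "fall-off" the excerpt describes.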

You'll also find explanations about why the Cyberknife is about to replace surgery -- at least in some cases. "The treatment is so accurate that it's now possible to treat tumours previously thought to be inoperable. Although the results of treatment do not always show immediately, in most cases the procedure will initially stop the growth of tumours before gradually reducing their size. As there is no open surgery, the complications normally associated with an operation are eliminated, as is the need for a long recovery time. This makes treatment suitable for those who are not well enough to cope with the side-effects of surgery and most patients leave the clinic the same day as their treatment."

Old Apple Hard Drive Becomes New Atomic Mirror

Before going further, what is an "atomic mirror"? As its name implies, it reflects atoms instead of light. In this article, NewsFactor Network tells us that Cal Tech researchers have fabricated such a mirror using an old Apple disk drive.
"An atom mirror is a device that reflects impinging atoms in an analogous manner to the way a regular optical mirror reflects an incoming light beam," said Cal Tech quantum-optics researcher Benjamin Lev. "The difference between the optical mirror and the atom mirror is that ... to reflect photons one only needs a suitable metallic surface, [but] to reflect atoms one needs to create some sort of repulsive force for the atoms as they near the surface."
The Cal Tech research team "fabricated a magnetic mirror by etching a common hard drive, and used this mirror to reflect a cold cloud of cesium atoms," wrote lead researcher Hideo Mabuchi.
The common hard drive, Mabuchi explained, has several features that make it the perfect raw material for atomic mirror makers -- a large, flat, magnetic surface; smooth contours; and rigid construction.

Here is what the etched hard drive looks like.


But what can we do with such a thing?
The "atomic mirror" ultimately may help engineers create atomic lasers, ushering in new telecommunications technologies based on atoms rather than photons, and atomic -- not electronic -- signals.
Atomic optics may find application in another burgeoning tech field that remains in early infancy -- quantum-computer science. "One exciting prospect is to use the atom mirror, combined with electric fields, to perform quantum logic gates necessary for building a quantum computer."

Nanocomputing: Simple Optoelectronic Devices Perform Logic Operations

Researchers at the Georgia Institute of Technology have demonstrated a new type of nanometer-scale optoelectronic device (combining light and electronics technologies) that can perform addition and other complex logic operations.

These quantum devices are based on arrays of individual electroluminescent silver nanoclusters.
"In effect, we are demonstrating optoelectronic transistor behavior," said Robert Dickson, a professor in Georgia Tech's School of Chemistry and Biochemistry. "Instead of measuring current output as in standard electronic transistors, we measure electroluminescent output for a given voltage input. Our devices act in a way that is analogous to a transistor with light as the output instead of electrical current."

Each cluster contains between two and eight silver atoms, and emits light when electrically excited by a specific voltage. Operating the device requires a pair of separate electrical pulses, the second of which generates electroluminescence when specific nanoclusters are activated according to voltage level.

Individual clusters can operate as logic gates with AND, OR, NOT, and XOR functions via the application of different pulses, while more complex operations can be performed by increasing the number of clusters.
"By using this complicated on-off behavior and the discrete energy levels of different molecules, we can get complicated behavior in a relatively simple device," said Dickson.
Increasing the number of clusters operating together could permit formation of large optoelectronic arrays able to perform complex operations. As long as each cluster could be separated enough to be resolved by a camera, arrays could contain thousands of clusters.
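The described behavior, light output as a function of voltage input, maps naturally onto Boolean gates. Here is a toy model of a single cluster acting as an XOR gate; the voltage window and input encoding are invented for illustration, not measured values from the Georgia Tech devices.

```python
# Toy model of a cluster acting as a logic gate: two input voltages,
# light emitted (True) only when the combined drive falls inside an
# activation window. Thresholds are invented for illustration.

def cluster_emits(v1: float, v2: float,
                  lo: float = 1.0, hi: float = 3.0) -> bool:
    """Emit light when the combined voltage sits inside the window."""
    total = v1 + v2
    return lo <= total <= hi

# With logical 0 encoded as 0 V and logical 1 as 2 V, this window
# behaves like XOR: light only when exactly one input is high.
encode = {0: 0.0, 1: 2.0}
for a in (0, 1):
    for b in (0, 1):
        print(a, b, cluster_emits(encode[a], encode[b]))
```

The point of the sketch is that a device with a non-monotonic response to its input (on only inside a window) can compute XOR in a single element, something a single conventional transistor cannot do.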
Dickson doesn't expect the new optoelectronic devices to replace traditional semiconductor-based computers for ordinary tasks. Instead, they might be used for complex and highly specialized computations that are difficult for traditional computing systems.

He also hopes that the breakthrough will inspire other researchers to reconsider nanometer-scale computing.

This research is funded by the National Science Foundation and will be reported in the March 18 issue of the journal Proceedings of the National Academy of Sciences.

Microsoft's Palladium -- or TCP/MS

Microsoft has unveiled a new security technology code-named Palladium, which got lots of attention from the press last week. Palladium should appear around 2006, at the same time as the next generation of Windows, code-named Longhorn.

So why this concern about a product that will not appear for four years (or more; we're dealing with Microsoft)? Simply because it will invade your privacy. And because it will cost you lots of money (again, we are talking about Microsoft).

As Reuters wrote on July 2, "Instead of storing sensitive information such as passwords on software, Palladium will also aim to protect information at the hardware level."

But let's use some words from a really talented columnist, Robert X. Cringely, to give you a clearer view of what Palladium will mean for you.
Last August, I wrote of a rumor that Microsoft wanted to replace TCP/IP with a proprietary protocol -- a protocol owned by Microsoft -- that it would tout as being more secure. Actually, the new protocol would likely be TCP/IP with some of the reserved fields used as pointers to proprietary extensions. I called it TCP/MS in the column.
This week, Microsoft announced Palladium through an exclusive story in Newsweek written by Steven Levy, who ought to have known better. Palladium is the code name for a Microsoft project to make all Internet communication safer by essentially pasting a digital certificate on every application, message, byte, and machine on the Net, then encrypting the data EVEN INSIDE YOUR COMPUTER PROCESSOR. Palladium compatible hardware (presumably chipsets and motherboards) will come from both AMD and Intel, and the software will, of course, come from Microsoft. That software is what I had dubbed TCP/MS.
The point of all this is simple. It may actually make the Internet somewhat safer. But the real purpose of this stuff, I fear, is to take technology owned by nobody (TCP/IP) and replace it with technology owned by Redmond. That's taking the Internet and turning it into MSN. Oh, and we'll all have to buy new computers.
This is diabolical. If Microsoft is successful, Palladium will give Bill Gates a piece of every transaction of any type while at the same time marginalizing the work of any competitor who doesn't choose to be Palladium-compliant. So much for Linux and Open Source, but it goes even further than that. So much for Apple and the Macintosh. It's a militarized network architecture only Dick Cheney could love.
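The "certificate on every message" idea Cringely describes can be illustrated with a simplified message-authentication sketch. Palladium itself relied on hardware-backed certificates; this HMAC version with a shared key is only an analogy for the general mechanism of tagging every message with a verifiable signature.

```python
import hmac
import hashlib

# Simplified sketch of authenticated messaging: every message carries a
# tag that any holder of the key can verify. Palladium used hardware-
# backed certificates rather than a shared key like this.

KEY = b"machine-secret"  # illustrative; real systems keep keys in hardware

def sign(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag for the message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Check the tag in constant time to avoid timing leaks."""
    return hmac.compare_digest(sign(message), tag)

msg = b"hello"
tag = sign(msg)
print(verify(msg, tag))          # genuine message verifies
print(verify(b"tampered", tag))  # altered message does not
```

Cringely's worry was precisely about who controls the keys and certificates in such a scheme: whoever issues them can decide which software and messages are "valid."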

Rack 'n Roll: Why you should take a Mac user to lunch

Apple Computer, Inc. introduced the Xserve system about two months ago. I don't know if the product will be successful, but the specifications are impressive.

You can pack two G4 processors with 2 gigabytes of memory and 480 gigabytes of disk storage into a 1U unit. Or you can have 84 processors with almost 20 terabytes of storage in one big rack, for a peak performance of 630 gigaflops.
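The rack-level figures follow from simple multiplication over a standard 42U rack, assuming one Xserve per 1U slot and roughly 15 gigaflops of peak performance per unit (the per-unit gigaflops figure is inferred from the totals, not an Apple spec I can confirm):

```python
# Check the rack-level figures against the per-unit Xserve specs.
# Assumes a standard 42U rack fully populated with 1U Xserves,
# each holding 2 CPUs, 480 GB of disk, and ~15 Gflops peak (inferred).

units = 42
cpus = units * 2           # 84 processors
storage_gb = units * 480   # 20,160 GB, i.e. almost 20 terabytes
gflops = units * 15        # 630 Gflops peak for the full rack

print(cpus, storage_gb / 1000, gflops)
```

The numbers line up with the totals quoted above, which is a useful sanity check on vendor rack-scale claims.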

And if it wasn't enough, Apple has a great motto. Here it is.


All the details about this server are here.

After this -- rather admiring -- introduction, let's see what LinuxWorld has to say about the Unix market and the Xserve.
Although the actual numbers are a bit fuzzy, it seems that Sun leads the Unix market in terms of the number of users served, IBM leads in Unix related revenues, and Apple, not Dell or HP, sells the most Unix boxes. Those machines run the MacOS X layer on top of Darwin, an open source BSD variant with a MACH kernel.
Like Linux, the underlying Unix for MacOS X is an open source production and, again like Linux, it has all the traditional Unix virtues including high reliability, network compatibility, efficient resource use, and access to a wide variety of lower cost, cutting-edge tools and applications.


Apple should ship almost 4 million Unix desktops this year, and each one of them represents a new opportunity for open source ideas to take root and for products like OpenOffice.org to find users. Equally importantly, each time a Mac moves into an office environment it gets harder to maintain the fiction that homogeneous (meaning all Windows) systems are cheaper or easier to run.

Paul Murphy compares prices for similar systems from Sun, Dell and Apple with different OSes. Guess what: Apple is the cheapest.

He also compares hardware and software prices of a desktop system running a Microsoft operating system today and twenty years ago.

Here are the numbers for 1981.
Hardware: $2,959
Microsoft OS: $39.95

Now, let's come back to 2002.
Hardware: $450
Microsoft OS: $199

In other words, the hardware price decreased by about 85 percent while the price of the Microsoft operating system increased by about 400 percent.
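The percentage changes are easy to verify from the prices above:

```python
# Percent change in hardware and OS prices between 1981 and 2002,
# from the figures quoted above.

def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new (negative means a decrease)."""
    return (new - old) / old * 100

hw_change = pct_change(2959, 450)     # hardware: about -85%
os_change = pct_change(39.95, 199)    # Microsoft OS: about +398%

print(round(hw_change), round(os_change))
```

So hardware fell to about a seventh of its 1981 price while the operating system roughly quintupled.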

A 'Smart' Email Software Organizes Your Tasks

You probably receive dozens of emails every day about various aspects of your business or personal life. And because your email program doesn't understand the relationships between messages, except for the occasional thread, you have to manage your activities by looking through lists of emails. But now, two computer scientists from University College Dublin (UCD) and IBM have developed the Active Email Manager (AEM) and have even filed patents for this 'smart' email program. Their prototype can distinguish work-related tasks -- which it assigns to a workflow -- from personal email. The software could be integrated into commercial products from IBM within two years. Read more...

Here are some details about the project.
A University College Dublin (UCD) scientist has filed a patent application for a new technology that he believes can turn email into a much more effective business tool. US-born Dr Nicholas Kushmerick, a senior lecturer in the Department of Computer Science at UCD, has developed the technology over the past year during his part-time position as visiting scientist on IBM’s Centre for Advanced Studies (CAS) initiative.
Kushmerick developed the technology, known as Active Email Manager (AEM), in concert with New York-based IBM researcher Tessa Lau. Together they developed a machine-learning algorithm that automatically keeps track of tasks and associated emails, in order to build up a work flow for each task.
"The vision is that rather than come in and download all your emails, you could just call up your to do list and manage your activities," Kushmerick explains.

Now, the two researchers have developed a prototype of the software and are busy testing it. And IBM wants to use the technology in some of its future products.
The technology is currently being appraised by two separate research groups within IBM, with the aim of turning into a commercial product. One of these is the Massachusetts-based product development team that develops IBM’s suite of collaboration software, Lotus Workplace. "There are some pretty intensive discussions going on now to see if we can get enough attention and convince them that our idea is feasible and that they would put it into their product pipeline," says Kushmerick.

The research work was presented at the 2005 International Conference on Intelligent User Interfaces (IUI 2005), which was held on January 9-12, 2005, in San Diego, California. You can find the abstract of the paper, "Automated Email Activity Management: An Unsupervised Learning Approach," in the 2005 Conference Program.
Many structured activities are managed by email. For instance, a consumer purchasing an item from an e-commerce vendor may receive a message confirming the order, a warning of a delay, and then a shipment notification. Existing email clients do not understand this structure, forcing users to manage their activities by sifting through lists of messages. As a first step to developing email applications that provide high-level support for structured activities, we consider the problem of automatically learning an activity's structure. We formalize activities as finite-state automata, where states correspond to the status of the process, and transitions represent messages sent between participants. We propose several unsupervised machine learning algorithms in this context, and evaluate them on a collection of e-commerce email.
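The finite-state formalization described in the abstract can be sketched in a few lines of Python. The state and message names below are hypothetical, chosen only to mirror the e-commerce example; the paper's actual automata are learned from data, not hand-written:

```python
# Sketch of an e-commerce purchase activity as a finite-state automaton.
# States describe the status of the order; transitions are message types.
# All names here are illustrative, not taken from the paper.
TRANSITIONS = {
    ("ordered", "order_confirmation"): "confirmed",
    ("confirmed", "delay_warning"): "delayed",
    ("confirmed", "shipment_notification"): "shipped",
    ("delayed", "shipment_notification"): "shipped",
}

def track_activity(messages, state="ordered"):
    """Advance the activity state for each incoming message."""
    for msg in messages:
        # Unknown (state, message) pairs leave the state unchanged.
        state = TRANSITIONS.get((state, msg), state)
    return state

print(track_activity(["order_confirmation", "delay_warning",
                      "shipment_notification"]))  # -> shipped
```

The point of such a model is that the email client can show the user "order delayed" rather than three loose messages in an inbox.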

Please note that this work received an Honorable Mention for the Outstanding Paper Award at IUI 2005.

For more information, here is a link to the full version of this paper (PDF format, 8 pages, 234 KB), available from Kushmerick's website.

Software Agents Can Help Time-Stressed Teams

Penn State researchers have developed software agents which can help human teams react more accurately and quickly in time-stressed situations than human teams acting alone. According to this news release, the software was tested in a military command-and-control simulation. "When time pressures were normal, the human teams functioned well, sharing information and making correct decisions about the potential threat." But when the pressure increased, the human teams made errors that would have cost lives in real situations. The decisions made by agent-supported human teams were much better. Now, it remains to be seen whether this software can be used in other stressful situations, such as emergency management operations. Read more...

Here is a description of the simulation experiment.
In the simulation, team members had to protect an airbase and supply route which were under attack by enemy aircraft. The scenarios were configured with different patterns of attack and at different tempos. The situation was complicated because team members had to determine at first if the aircraft were neutral or hostile. Furthermore, two team members were dependent on the third whose role was to gather information and communicate it to them.
"When the teams don't know if the incoming aircraft is the enemy, the defense team can't attack, and the supply team takes action to avoid the incoming threat, which causes a delay in delivery," said Shuang Sun, one of the researchers. "These decisions lower the performance of the whole team."
When the information gatherer was supported by the researchers' R-CAST software system, the information was gathered and shared more quickly. As a result, the human-agent teams were better able to defend themselves from enemy attack and deliver supplies without delay, Sun said.

The illustration below shows the structure of the two teams used for testing, with human teams on the left, and agent-supported human teams on the right (Credit: Penn State).


And the diagram below shows how these different teams were able to destroy enemies when stress increased (Credit: Penn State).


It seems pretty obvious that software agents helped humans react better in this stressful situation.

The researchers, Xiaocong Fan, Shuang Sun, John Yen, and Michael McNeese, have presented the results of their experiments at the Fourth International Joint Conference on Autonomous Agents and Multi-Agent Systems, which was held in Amsterdam on July 25-29, 2005 (AAMAS 2005).

Here is a link to their full paper named "Extending the Recognition-Primed Decision Model to Support Human-Agent Collaboration" (PDF format, 8 pages, 413 KB). Here are some selected excerpts from the introduction.
The aim of this research is to support human decision making teams using cognitive agents empowered by a collaborative Recognition-Primed Decision (RPD) model. In this paper, we first describe an RPD-enabled agent architecture (R-CAST), in which we have implemented an internal mechanism of decision-making adaptation based on collaborative expectancy monitoring, and an information exchange mechanism driven by relevant cue analysis.
We have evaluated R-CAST agents in a real-time simulation environment, feeding teams with frequent decision-making tasks under different tempo situations. While the result conforms to psychological findings that human team members are extremely sensitive to their workload in high-tempo situations, it clearly indicates that human teams, when supported by R-CAST agents, can perform better in the sense that they can maintain team performance at acceptable levels in high time pressure situations.

New Software Brings Lip-Reading to Cell Phones

After talking several times here about technologies designed to help blind people, it's time to look at one which will benefit deaf people.

With this technology, they will be able to use cell phones, but not in the streets. They'll need a PC -- soon a PDA -- and for the time being, to be in Israel.

Reuters unveils the story.
Israel's largest mobile phone operator Cellcom and Israeli start-up SpeechView (Your Link to the Hearing World) launched on Tuesday worldwide patented software that will allow the deaf and hard of hearing to communicate through mobile phones.
The product, LipCcell, is software installed on the user's computer and connected by a cable to a cell phone. When the deaf user gets a call, the software translates the voice on the other end of the line into a three-dimensional animated face on the computer, whose lips move in real-time sync with the voice, allowing the receiver to lip-read.
The software can initially be used only with a computer or laptop, said SpeechView chief executive Tzvika Nayman, though future developments will allow the software to be installed on personal digital assistants.

CSI IN A BOX

If you're not familiar with U.S. TV, CSI (Crime Scene Investigation) is a weekly CBS show which gathers about 20 million viewers and which routinely features forensic facial reconstruction. Reconstructing human faces from skulls found by the police is nothing new, and it has already been done with computers. But this was a long process. Now, a Canadian startup company, HumanCore, has developed new human anatomy software to do the job in about 30 minutes. This software has already attracted the attention of law enforcement agencies in the U.S. and in Canada. But it also can be used by mechanical engineers to design new products, or even by the clothing industry. Read more for an interview with one of the co-founders of HumanCore as well as pictures and additional details about this innovative software.

HumanCore is basically a technology to simulate the assembly of human bones, muscles, cartilage and skin in software using parametric CAD technology. There are many applications for this technology, but the first release focuses on automated cranio-facial reconstruction.

Here is how it works.

As seen in a CSI episode (Who Are You), the current method used in forensics is to create a clay reconstruction on top of the skull to be identified. It usually takes from 3 days to several months, depending on the skills of the artist and the amount of detail. The following photo is an example of this approach (Credit: Jean N. Prudent, HumanCore, as well as all the other pictures below).



With HumanCore, you just have to scan the geometry of the skull and load this into the program.



Then, you must reorient the skull and apply "fit markers," which are points on the skull that allow the software to recognize the main morphological features. Without these, there is no way for the software to know where the eye sockets or the nose cavity are, for example. At this stage, some regions can be "painted" red to mark them as "undefined." This allows the software to reconstruct damaged skulls with missing regions, which is a common occurrence in murder cases.



In the next step, you just launch the automated reconstruction process. What the software does is try to deform its internal bone representation to fit the scanned skull's geometry. The process is made possible by the presence of the fit markers.



Finally, you must input the vital stats (age, gender, ancestry and level of fitness) and select the appropriate "standard average tissue depth table" for the reconstruction. The vital stats are usually provided by an experienced physical anthropologist since it is impossible for the software to determine this on its own. The tables define the depth of soft tissue between the skull and the skin. The software includes standard tables that have been compiled for several population groups all over the world (Europeans, Asians, Africans, etc.).
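To make the table-selection step concrete, here is a rough sketch in Python. The table keys, landmark names and depth values below are invented for illustration only; HumanCore's actual tables cover dozens of skull landmarks compiled from population studies:

```python
# Hypothetical sketch of a tissue-depth lookup keyed on vital stats.
# All values are made up for illustration; real tables are compiled
# from measurements of population groups (Europeans, Asians, etc.).
TISSUE_DEPTH_MM = {
    ("european", "male", "adult"):   {"glabella": 5.5, "nasion": 6.5},
    ("european", "female", "adult"): {"glabella": 4.75, "nasion": 5.5},
}

def depth_at(landmark, ancestry, gender, age_group):
    """Return the soft-tissue depth (mm) at a skull landmark."""
    table = TISSUE_DEPTH_MM[(ancestry, gender, age_group)]
    return table[landmark]

print(depth_at("nasion", "european", "female", "adult"))  # -> 5.5
```

The reconstruction engine would then offset the skin surface from the skull by these depths at each landmark, which is why the anthropologist's vital-stats assessment matters so much.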



Jean Prudent, the author of the software, told me that the whole process takes less than 30 minutes, which obviously would speed up forensic investigations. I asked him what he thought about the market.
Cranio-facial reconstruction is a niche market, but its physical anthropology foundations, just like mathematics and physics, have applications in many other fields including paleo-anthropology, gross anatomy, kinanthropology and of course ergonomics. We're currently working on a SolidWorks plug-in implementation, which will allow designers of implants and anaplastologists to test their designs beforehand.

This software will be sold for US $1,495. This might be pocket money for a large city police department, but if the company wants to enter other market segments, such as the fashion industry, which uses many freelance graphic designers, will the price be too high? Here is Prudent's answer.
You are right: this is not a tool for artists. This is scientific software and, as such, it was priced to be affordable to current users of Mathematica and MathCAD ($1,200 to $1,700) as well as users of SolidWorks ($3,000-$4,000). Students enrolled in legitimate university programs benefit from the same deep discount available for Mathematica, which makes it affordable to them too. Initial feedback tells me that the pricing is correct in that context. The individuals from the fashion industry who contacted us were interested in the scientific aspects of human variations (linking population surveys to production planning) -- they were not designers or artists.

With its parametric musculo-skeleton engine integrated with a 2D/3D graphics editor, this software can also be used for many other CAD projects, such as designing wheelchairs or portable consumer electronics.

Below is an illustration showing the parametric musculo-skeleton engine.



For more information about this software, which should be available soon, please read the HumanCore brochure from which the last image was extracted (PDF format, 2 pages, 10.6 MB).

[Disclaimer: I have absolutely no ties with this company and I haven't personally used this software. So please try it before opening your checkbook.]

Sources: Roland Piquepaille, January 4, 2006; and HumanCore web site

INTEL

Actually, the real title of this Peter Lewis article for Fortune is "Intel Outside." By this, he means that Intel has grand plans for expansion into wireless territory.
"Centrino Inside" is the catch phrase that you'll hear ad nauseam starting later this month. What the heck is a Centrino? It's a major departure from earlier mobile microprocessor designs and the centerpiece of Intel's plan to promote -- and dominate -- computing in wireless network environments.
Centrino, Intel says, is the first step toward a future when all computing devices communicate and all communications devices compute.

Lewis found that "Centrino" is an anagram of "no cretin," "rent icon," and "not nicer," and checks whether these anagrams make sense. Let's check the first one, "no cretin."
Although it operates at lower clock speeds than current Intel Pentium-4 Mobile chips, the main Centrino processor is far from computationally challenged. It's a new low-power, high-performance processor called the Pentium-M.
That's only part of the story. The Centrino package comprises three main parts: Besides the Pentium-M, it includes a wireless radio chip for communicating securely with the growing number of 802.11b (Wi-Fi) wireless network hot spots, and a supporting chipset that Intel says will help improve the battery life and graphics performance of mobile devices.

And here are Intel's wireless plans.
Intel says Centrino notebooks can roam seamlessly among thousands of "Centrino-certified" hot spots in airports, hotels, and other public places. Intel plans to spend hundreds of millions of dollars this year -- and devote 2,500 employees -- to test hot spots and third-party components, making sure that Centrino-based devices work flawlessly.

SAP AFS

Everyone knows SAP, its history and its functionality. But I think an understanding of SAP AFS is not as common, so I'd like to give you a brief introduction.

SAP AFS is the SAP solution focused on the Apparel and Footwear Industry (AFS stands for Apparel and Footwear Solutions). In the standard categorization of SAP’s industry-specific solutions, AFS is categorized under “Consumer Products”.

SAP AFS is built on the SAP core with valuable extra functionality to support the specific needs of the apparel and footwear industry. Among these are the ability to handle sizes (grids), the ability to categorize products based on their common features (such as the country importing the goods, or the quality grade of the product), and the ability to handle seasonality.

Handling materials in sizes is a special requirement of the apparel and footwear industry, and it is achieved with the AFS grid functionality. AFS grids are three-dimensional in nature; in other words, up to three variables can be maintained in a grid value. For example, if the user wants to distinguish products by labeled size (Small -- S), side seam length (23”) and collar size (16”), this can be achieved using AFS grids. All three variables are maintained independently; put together, they make a unique combination. The grid value in the above example will look like “S-23-16”. This greatly reduces the data volume and the complexity.
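The grid composition described above can be sketched in a couple of lines of Python. The function name and the validation are ours, not SAP's; this only illustrates how three independent dimensions collapse into one grid value:

```python
# Minimal sketch of composing an AFS grid value from three dimensions,
# following the S-23-16 example in the text. The function name is
# illustrative, not an actual SAP API.
def grid_value(size, seam_length, collar_size):
    """Combine three independently maintained dimensions into one value."""
    return f"{size}-{seam_length}-{collar_size}"

print(grid_value("S", 23, 16))  # -> S-23-16
```

Because each dimension is maintained once and combined on demand, the system avoids creating a separate material master record for every size combination.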

Categories are used extensively in AFS to group products which possess the same characteristics. For example, the same product can be graded as Quality-A or Quality-B after quality checks. These grades will have different market values and customer demand patterns, which can easily be replicated in SAP AFS using categories. Requirements and supply are then handled based on the category allocation.

Seasonality is probably the most important aspect of the apparel and footwear trade. The same product can change its characteristics with the season; for example, the color or some label details might change. This is where AFS seasonality comes into play. Seasonality is especially prominent in the SD (Sales and Distribution) module, but it affects the full supply chain. Seasonal settings can be maintained in AFS in combination with size and category, which is of great use in effectively maintaining and managing the large volumes of data specific to the industry.

SAP

(1) Short for Session Announcement Protocol, an announcement protocol used to communicate the relevant session setup information to prospective participants during multicast sessions. SAPs typically use the Real-Time Transport Protocol (RTP).

(2) Short for Service Advertising Protocol, a NetWare protocol used to identify the services and addresses of servers attached to the network. The responses are used to update a table in the router known as the Server Information Table.

(3) Short for Secondary Audio Program, an NTSC audio channel used for auxiliary transmission, such as foreign language broadcasting or teletext.

(4) Short for SAP America, Inc., Lester, PA, the U.S. branch of the German software company, SAP AG. The name SAP stands for Systems, Applications and Products in Data Processing. SAP's R/3 integrated suite of applications and its ABAP/4 Development Workbench became popular starting around 1993.

WHAT IS ERP ALL ABOUT

Some of the features of Open Source ERP are as follows:
Cutting down the costs
This is the first and foremost advantage of open source ERP. Companies find it taxing to pay additional fees every time their existing license system comes up for renewal; having invested a large amount at the initial stage, they are reluctant to pay again and again. Open source ERP does away with all of this: all the company has to do is download the software and make use of it.

However, this issue has drawn a lot of controversy recently. Some companies feel that open source is not mature enough to meet their application deliverables; others find it convenient to pay for the services rendered. While that debate continues, the fact remains that open source ERP has made it possible for SMEs to enter the market, owing to its low cost.

Companies don't choose open source applications on cost alone. They choose them only if they are convinced the software will serve them, keeping their IT infrastructure and requirements in mind.

DIFFERENCES BETWEEN COMMERCIAL AND OPEN SOURCE ERP

Some of the differences are as follows:
Pricing
Commercial ERP is an expensive package, suitable mainly for bigger corporations. Prices vary significantly with the size of the company and the volume of business, but in any case the packages have been found to be extremely costly, whatever the quantity purchased. They are not very flexible or easy to mold: their usage terms are rarely liberal, and they cause trouble when modified, so deployments also turn out to be costly and inconvenient down the road. Another major allegation against these packages is that they carry a lot of hidden costs.

The greatest advantage of an open source ERP application is that it is available free of cost. This is a motivating factor for companies that shun the idea of ERP because of the price tag. Even the licenses come along with the source code, which makes training procedures much easier. Commercial ERP vendors, by contrast, often don't disclose the full price initially, since it would make any sane person refuse the order, and are later blamed for inflating costs. This problem is unknown in open source ERP, where everything is free.
Flexibility
This important feature is largely absent from commercial ERP. It is a difficult task to make such packages suit a company's working patterns; instead of modifying the software, companies often had no choice but to change their way of doing business. This was often a debacle, even though it was argued that the best ERP packages were designed around the best business practices.

With open source ERP, however, everything is decided by the code. Companies can therefore make the necessary modifications in the code themselves, without much support from the vendor. Another advantage of open source is that it does not interfere with the regular schedule of the company: regular business can go on uninterrupted while ERP implementation, deployment or reengineering is carried out at full scale. This is a major difference between commercial and open source ERP applications.
Duration, Dependence and Results
The time needed to implement open source ERP is much shorter than for commercial ERP. The innumerable complexities of commercial ERP call for a longer time span; it consumes a lot of time not only during implementation but at every stage of the ERP process, due to the nature of the work involved.

When it comes to relying on the vendor, open source ERP enjoys a considerable edge over commercial ERP. Since open source is largely a self-built process, companies rely less on vendors and take care of their needs themselves. Productivity is also high in open source ERP systems, and failure rates are very low.
Training
A lot of training is required to use commercial ERP, which calls for large investments of time and money, and there is plenty of controversy around it: if the training doesn't give the necessary impetus, the results will be poor. Companies are also debating the validity of training sessions designed and handled exclusively by the ERP vendor.

Open source ERP, on the other hand, does not require much training. The source code is more than a training manual, and the results are bound to be effective because users learn through self-teaching. The company need not spend much on training and makes minimal use of resources. This is another way of reducing the level of dependence on the ERP vendor.
Security
Commercial ERP systems are said to be less secure than open source ERP applications; by and large they are prone to the traps and pitfalls of hackers, no matter how tight the segregation of components. Open source ERP, even though it makes everything transparent and available in the public domain, brings it to the user's notice whenever something goes wrong.
Conclusion
The differences between commercial and open source ERP applications show the edge enjoyed by open source ERP players. The fact remains, however, that they are not yet well recognized in the market, for fear of failure: customers are still prepared to pay for results. Open source ERP can go a long way only if awareness is high, which is encouraging in the current scenario.

ERP

Prior experience with ERP
The company should check whether the platforms in question have already been used successfully in an ERP environment. The more they have been tried and tested, the greater their credibility. This helps to increase the comfort zone, psychologically and technically, because the platform has demonstrated competence in ERP.
Networking facilities
The organization should consider the channels used for disseminating information and their relevance to the chosen platform. By and large, the features should not be rigid: the platform should allow the free flow and exchange of data between networks and be able to work in the latest environments.
Proper designs
ERP applications are often complex and taxing, and things will get worse if the preferred ERP platform is of the same stature. The design of the platform will speak for itself only if it is done unambiguously. The design should be such that the platform can be used freely in either integrated or distributed applications.
Effective outputs
The ERP platform should contribute valuably to the output. The system should work well and be able to compensate for the flaws that arise during procedures. All platforms are bound to struggle in the introduction stage; this problem is unavoidable. The preferred ERP platform should resist the errors that come up during procedures, even when they stem from some functional component and are not directly connected with the platform.
Sustainability
The company needs to be assured that it can run the platform for a considerably long time. The ERP platform will be retained only if the company is satisfied that it justifies the costs incurred. The preferred ERP platform should be capable of being tuned to the company's environment.
Assortment of related levers
The company has to check the platform's compatibility not only with the main applications but also with the supporting tools. Since they work hand in hand, it is important to give them due attention. This is important in ensuring that there are no hassles when the entire operation is set in motion.
Conclusion
These general characteristics need not suit every company. They can be taken as parameters for assessment, but taking them as the deciding factors will not suffice. The company should also take all the relevant internal and external factors into account before choosing an ERP platform.

What is the OSI Model?

The OSI model is a reference model which most IT professionals use to describe networks and network applications.

The OSI model was originally intended to describe a complete set of production network protocols, but the cost and complexity of the government processes involved in defining the OSI network made the project unviable. In the time the OSI designers spent arguing over who would be responsible for what, TCP/IP conquered the world.



The OSI Model vs. The Real World

The biggest difficulty with the OSI model is that it does not map well to the real world!

The OSI model was created after many of today's protocols were already in production use. These existing protocols, such as TCP/IP, were designed and built around the needs of real users with real problems to solve. The OSI model was created by academicians for academic purposes.

The OSI model is a very poor standard, but it's the only well-recognized standard we have which describes networked applications.

The easiest way to deal with the OSI model is to map the real-world protocols to the model, as well as they can be mapped.

Layer   Name           Common Protocols
  7     Application    SSH, telnet, FTP
  6     Presentation   HTTP, SMTP, SNMP
  5     Session        RPC, Named Pipes, NETBIOS
  4     Transport      TCP, UDP
  3     Network        IP
  2     Data Link      Ethernet
  1     Physical       Cat-5

The difficulty with this approach is that there is no general agreement as to which layer of the OSI model to map any specific protocol. You could argue forever about what OSI model layer SSH maps to.
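One way to make such a mapping concrete is as a simple lookup table, here sketched in Python following the table above. As just noted, other layer assignments are equally defensible (HTTP, for instance, is more commonly placed at layer 7):

```python
# One possible protocol-to-OSI-layer mapping, following the table in
# the text. The assignments are debatable by nature; this is just one
# defensible choice, not an authoritative classification.
OSI_LAYER = {
    "SSH": 7, "telnet": 7, "FTP": 7,
    "HTTP": 6, "SMTP": 6, "SNMP": 6,
    "RPC": 5, "Named Pipes": 5, "NETBIOS": 5,
    "TCP": 4, "UDP": 4,
    "IP": 3,
    "Ethernet": 2,
    "Cat-5": 1,
}

print(OSI_LAYER["TCP"])  # -> 4
```

The fact that reasonable people would edit half the entries in this dictionary is exactly the problem with treating the OSI model as a precise description of real networks.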

A much more accurate model of real-world networking is the TCP/IP model:

TCP/IP Model
Application Layer
Transport Layer
Internet Layer
Network Interface Layer

The most significant downside of the TCP/IP model is that if you reference it, fewer people will know what you are talking about!

For a better description of why the OSI model should go the way of the dodo, disco, and DivX, read Kill the Beast: Why the Seven-Layer Model Must Die.
Books on the OSI Model
OSI Reference Model Pocket Guide

This pocket-sized reference provides the help you need to effectively prepare for the OSI-related questions you'll encounter on Cisco's CCNA certification exam. It also serves as a practical tutorial explaining the often-confusing layered architecture of modern network design. The Open Systems Interconnection (OSI) Reference Model is the basis for much of modern networking: a layered model under which all networking protocols and services are defined. Many attempts at explaining OSI become too abstract and require a very high level of network design knowledge and experience to even begin to understand. This pocket guide presents a more practical, user-friendly examination of the topic. The author, Howard Berkowitz, CCSI, is an accomplished network designer and instructor who has written two books on network architecture; he effectively uses real-life experiences, analogies, and humor to clear up some of the most common confusions and misconceptions associated with the OSI Reference Model. Sample questions (with correct answers and complete explanations), similar to those found on Cisco's certification exams, are included, allowing readers to test their OSI knowledge before attempting the actual tests.
OSI Reference Model for Telecommunications

The OSI (Open System Interconnection) Reference Model is a cornerstone of modern network design. Although the OSI model has become almost synonymous with data communications, it serves the public switched telephone network (PSTN) as well and is a productive way to organize and teach the building blocks of telecom systems. In OSI Reference Model for Telecommunications, hands-on expert Debbra Wetteroth provides telecom staffers the information they need to gain a working knowledge of this essential telecom service architecture and equipment. Her style breaks down the barriers between data and voice vocabularies. This quick reference to the OSI model puts the data you need every day at your fingertips.
OSI: A Model for Computer Communications Standards




Top 5 Free Networking Tools

Free White Papers on Networking
Reducing the Cost of Freedom: Wireless Expense Management
Find out how you can achieve Best-in-Class results. Access Your Complimentary Copy Today....
Take Advantage of Oracle's 2 Day DBA Course
This course is designed to complement the Oracle 2 Day DBA documentation....
Free Network Mapping Tool for Microsoft® Office Visio® Professional 2007 Users
Don't map your network by hand – let LANsurveyor Express...
ISO 14001:2004 Road Map
This road map can be used as a resource for your company as you travel down the road to ISO 14001:2004 certification....
IT Process Automation and VMware - Workload Automation for Real and Virtual Environments
Find out how UC4 Workload Automation software integrates with...
No Stone Unturned: Strategies for Cash Management in Hard Times
Under challenging economic circumstances, learn how mid-size companies are modifying...
Bookmark What is the OSI Model?
AddThis
Latest Blog Posts

* Google Hacked?

* Recycle your old phone – Make some money, and save the environment too!

* SourceForge vs. Freshmeat

* Fastest Web Browser: Google Chrome

* New Webmaster Forum

* Ubuntu Security Tools

* Terrorists Go Hi-Tech

* How to Dry a Cell Phone That's Come in Contact with Water

* The Best Christmas Gift for a Techie

* Vulnerability Management for Dummies


Books on the OSI Model
OSI Reference Model Pocket Guide
OSI Reference Model Pocket Guide

The pocket-sized reference provides the help you need to effectively prepare for the OSI related questions you'll encounter on Cisco's CCNA certification exam. It also serves as a practical tutorial explaining the confusing layered architecture of modern network design. The Open Systems Interconnection (OSI) Reference Model is the basis for much of modern networking. It is a layered model under which all networking protocols and services are defined. Many attempts at explaining OSI become too abstract and require a very high level of network design knowledge and experience to even begin to understand. This pocket guide presents a more practical, user-friendly examination of the topic. The author, Howard Berkowitz, CCSI, is an accomplished network designer and instructor who has written two books on network architecture. He effectively utilizes real-life experiences, analogies, and humor to clear up some of the most common confusions and misconceptions associated with the OSI Reference. Sample questions (with correct answers and complete explanations), similar to those found on Cisco's certification exams are included in this reference. These questions (with correct answers and complete explanations) allow readers to test their OSI knowledge level before attempting the actual tests.
OSI Reference Model for Telecommunications

The OSI (Open System Interconnection) Reference Model is a cornerstone of modern network design. Although the OSI model has become almost synonymous with data communications, it serves the public switched telephone network (PSTN) as well and is a productive way to organize and teach the building blocks of telecom systems. In OSI Reference Model for Telecommunications, hands-on expert Debbra Wetteroth gives telecom staffers the information they need to gain a working knowledge of this essential telecom service architecture and equipment. Her style breaks down the barriers between data and voice vocabularies, and this quick reference to the OSI model puts the data you need every day at your fingertips.
OSI: A Model for Computer Communications Standards




Free White Papers on Networking
Reducing the Cost of Freedom: Wireless Expense Management
Find out how you can achieve Best-in-Class results. Access Your Complimentary Copy Today....
Take Advantage of Oracle's 2 Day DBA Course
This course is designed to complement the Oracle 2 Day DBA documentation....
Free Network Mapping Tool for Microsoft® Office Visio® Professional 2007 Users
Don't map your network by hand – let LANsurveyor Express...
ISO 14001:2004 Road Map
This road map can be used as a resource for your company as you travel down the road to ISO 14001:2004 certification....
IT Process Automation and VMware - Workload Automation for Real and Virtual Environments
Find out how UC4 Workload Automation software integrates with...
No Stone Unturned: Strategies for Cash Management in Hard Times
Under challenging economic circumstances, learn how mid-size companies are modifying...
Software products:

Screen readers :- A screen reader is a TSR (terminate-and-stay-resident) program: it stays loaded in the background and makes your normal applications talk. The sound comes from your sound card through either speakers or headphones, a step forward from the separate hardware speech synthesizers that used to be required. The screen reader determines what needs to be spoken as it appears on the screen, and it gives you several ways to have information spoken back as you type: you may hear full words when you press the space bar, or every letter as you type it. A screen reader works better with some programs than others, depending on how closely a program follows certain accessibility standards. Screen readers also come with scripting for widely used programs such as MS Word and Internet Explorer, with which they work almost flawlessly.
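The letters-versus-words echo behavior described above can be sketched in a few lines. This is a simplified illustration, not code from any actual screen reader; the returned list of utterances is a hypothetical stand-in for calls to a text-to-speech engine.

```python
def echo_keystrokes(keystrokes, mode="words"):
    """Decide what a screen reader should speak as the user types.

    mode="letters": speak every character as it is typed.
    mode="words":   speak the buffered word when space is pressed.
    Returns the list of utterances, standing in for calls to a
    text-to-speech engine.
    """
    spoken = []
    word = ""
    for key in keystrokes:
        if mode == "letters":
            spoken.append(key)
        elif key == " ":
            if word:
                spoken.append(word)  # announce the completed word
            word = ""
        else:
            word += key
    return spoken

# Typing "hello world " in word mode announces each finished word.
print(echo_keystrokes("hello world ", mode="words"))  # ['hello', 'world']
```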

Screen enlargement software :- These programs also load in the background. Both of the widely used screen enlargement packages come in two levels: they can be purchased to enlarge information on the screen only, or with the addition of speech to help people with more limited vision. This is less speech than a screen reader provides, and it is not enough for someone with no usable vision. These programs come with a collection of features that give you different ways of viewing the screen. They will magnify the information from two to sixteen times; as you can imagine, if you make the information too large, you greatly diminish the usability of the product. You can also change the cursor to make it easier to follow, and change the background and foreground colors.
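At its simplest, the two-to-sixteen-times magnification described above is pixel replication. The following is a toy nearest-neighbour zoom on a character grid, an illustration of the idea rather than how any real magnifier is implemented:

```python
def magnify(grid, factor):
    """Nearest-neighbour zoom: replicate each cell `factor` times
    horizontally and each row `factor` times vertically."""
    if not 2 <= factor <= 16:
        raise ValueError("typical magnifiers offer 2x to 16x")
    out = []
    for row in grid:
        wide = "".join(ch * factor for ch in row)
        out.extend([wide] * factor)
    return out

for line in magnify(["ab", "cd"], 2):
    print(line)
# aabb
# aabb
# ccdd
# ccdd
```

Doubling a 2x2 "screen" yields a 4x4 one; the trade-off noted above is visible even here, since at 16x only a tiny fraction of the original screen fits into view at once.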

Optical Character Recognition products :- OCR products are available for people who are blind or have low vision, as well as for people with learning disabilities. Originally, people with learning disabilities used the blindness products, but more recently products have been developed for people who can see perfectly well but have difficulty processing information. OCR packages use standard scanners to bring a picture of the printed page into the computer. The text is then recognized and read aloud, as well as displayed in large print. The document can be edited, changed, and brought into another program such as a word processor. The learning-disability products add features for writing, such as word prediction, and include other features to help people study and retain information.
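The word prediction mentioned above reduces to a simple idea: given the letters typed so far, suggest the most common words that begin with them. A minimal sketch with a made-up vocabulary follows; real products use much larger, frequency-ranked dictionaries and often consider sentence context as well.

```python
def predict(prefix, vocabulary, limit=3):
    """Suggest up to `limit` vocabulary words starting with `prefix`.
    The vocabulary maps words to usage frequency; more frequent
    words are suggested first."""
    matches = [w for w in vocabulary if w.startswith(prefix.lower())]
    matches.sort(key=lambda w: -vocabulary[w])
    return matches[:limit]

# Hypothetical frequency-ranked vocabulary.
vocab = {"the": 100, "their": 40, "there": 60, "then": 50, "cat": 10}
print(predict("the", vocab))  # ['the', 'there', 'then']
```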

Braille translation software :- A Braille translator converts computer text, whether stored or newly created, into usable grade 2 braille, the standard braille used in the United States. Math must also be translated into Nemeth Code before it is embossed. Braille translation software makes it easy for someone who knows very little about braille to produce it.
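To give a feel for what a translator does at the character level, here is a sketch of uncontracted (grade 1) letter mapping using Unicode braille patterns. Real grade 2 translation is far more involved, applying hundreds of contraction rules, so this is an illustration only. Each letter is defined by which of the six dots are raised, and Unicode encodes dot n as bit n-1 above U+2800.

```python
# Dot patterns for letters a-j; the rest of the alphabet is derived from them.
AJ = {"a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
      "f": (1, 2, 4), "g": (1, 2, 4, 5), "h": (1, 2, 5), "i": (2, 4),
      "j": (2, 4, 5)}

def cell(dots):
    """Unicode braille cell: dot n corresponds to bit n-1 above U+2800."""
    return chr(0x2800 + sum(1 << (d - 1) for d in dots))

DOTS = dict(AJ)
for ch in "abcdefghij":                  # k-t repeat a-j with dot 3 added
    DOTS[chr(ord(ch) + 10)] = AJ[ch] + (3,)
for lo, hi in zip("klmno", "uvxyz"):     # u,v,x,y,z repeat k-o with dot 6
    DOTS[hi] = DOTS[lo] + (6,)
DOTS["w"] = AJ["j"] + (6,)               # w falls outside the pattern

def to_braille(text):
    """Uncontracted letter-for-letter braille; other characters become blank cells."""
    return "".join(cell(DOTS[c]) if c in DOTS else "\u2800" for c in text.lower())

print(to_braille("cab"))  # ⠉⠁⠃
```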