Sunday, July 10, 2011
Autonomic Computing
“Autonomic Computing” is a new vision of computing initiated by IBM. This new paradigm shifts the fundamental definition of the technology age from one of computing to one defined by data. Access to data from multiple, distributed sources, in addition to traditional centralized storage devices, will allow users to transparently access information when and where they need it. At the same time, this new view of computing will require shifting the industry's focus from processing speed and storage to developing distributed networks that are largely self-managing, self-diagnostic, and transparent to the user.
The term autonomic is derived from human biology. The autonomic nervous system monitors our heartbeat, checks our blood sugar level and keeps our body temperature close to 98.6 °F, without any conscious effort on our part. In much the same way, autonomic computing components anticipate computer system needs and resolve problems with minimal human intervention. However, there is an important distinction between autonomic activity in the human body and autonomic responses in computer systems. Many of the decisions made by autonomic elements in the body are involuntary, whereas autonomic elements in computer systems make decisions based on tasks you choose to delegate to the technology. In other words, adaptable policy, rather than rigid hard coding, determines the types of decisions and actions autonomic elements make in computer systems.
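The body's control loop has a software analogue: IBM describes autonomic managers in terms of a monitor-analyze-plan-execute (MAPE) loop driven by policy. The sketch below is a minimal, hypothetical illustration of such a loop; the metric, thresholds and scaling action are invented for the example and stand in for real telemetry and real effectors.

```python
# Minimal, illustrative policy-driven autonomic loop (monitor -> analyze ->
# plan -> execute). All names, thresholds and the "replica" effector are
# hypothetical; a real autonomic manager would act on live telemetry.
import random

POLICY = {"max_cpu_percent": 80, "min_replicas": 1, "max_replicas": 8}

def monitor():
    return {"cpu_percent": random.uniform(10, 100)}   # stand-in for real metrics

def analyze(metrics):
    return metrics["cpu_percent"] > POLICY["max_cpu_percent"]

def plan(overloaded, replicas):
    if overloaded and replicas < POLICY["max_replicas"]:
        return replicas + 1
    if not overloaded and replicas > POLICY["min_replicas"]:
        return replicas - 1
    return replicas

def execute(current, target):
    if target != current:
        print(f"scaling from {current} to {target} replicas")
    return target

replicas = 2
for _ in range(5):                                    # one pass per control cycle
    replicas = execute(replicas, plan(analyze(monitor()), replicas))
```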
Ref: http://seminars4you.info/ComputerScience.html
Brain Finger Printing
Brain fingerprinting is based on the finding that the brain generates a unique brain wave pattern when a person encounters a familiar stimulus. The use of functional magnetic resonance imaging in lie detection derives from studies suggesting that persons asked to lie show different patterns of brain activity than they do when being truthful. Issues related to the use of such evidence in courts are discussed. The author concludes that neither approach is currently supported by enough data regarding its accuracy in detecting deception to warrant use in court.
In the field of criminology, a new lie detector has been developed in the United States of America, called “brain fingerprinting”. This invention is claimed to be the best lie detector available to date and is said to detect even smooth criminals who pass the polygraph test (the conventional lie detector test) with ease. The new method employs brain waves, which are useful in detecting whether the person subjected to the test remembers finer details of the crime. Even if the person willingly suppresses the necessary information, the brain wave is sure to trap him, according to the experts, who are very excited about the new kid on the block.
Brain Fingerprinting is designed to determine whether an individual recognizes specific information related to an event or activity by measuring electrical brain wave responses to words, phrases, or pictures presented on a computer screen. The technique can be applied only in situations where investigators have a sufficient amount of specific information about an event or activity that would be known only to the perpetrator and investigator. In this respect, Brain Fingerprinting is considered a type of Guilty Knowledge Test, where the "guilty" party is expected to react strongly to the relevant details of the event or activity.
Existing (polygraph) procedures for assessing the validity of a suspect's "guilty" knowledge rely on measurement of autonomic arousal (e.g., palm sweating and heart rate), while Brain Fingerprinting measures electrical brain activity via a fitted headband containing special sensors. Brain Fingerprinting is said to be more accurate in detecting "guilty" knowledge, and less prone to the false positives of traditional polygraph methods, but this is hotly disputed by specialized researchers.
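As a rough illustration of the comparison such a test performs, the sketch below averages synthetic "brain wave" epochs for probe (crime-relevant) and irrelevant stimuli and compares their amplitudes in a response window. All of the data, window positions and the decision threshold are made up; real systems analyze recorded EEG responses (for example the P300 wave) with proper statistical tests.

```python
# Toy "guilty knowledge" comparison on synthetic data: average the response to
# probe stimuli versus irrelevant stimuli and compare amplitudes in a window.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_samples = 40, 200

irrelevant = rng.normal(0.0, 1.0, (n_trials, n_samples))
probe = rng.normal(0.0, 1.0, (n_trials, n_samples))
probe[:, 80:120] += 2.0            # simulated recognition response in one window

def window_mean(epochs, start, stop):
    return epochs[:, start:stop].mean()

a_probe = window_mean(probe, 80, 120)
a_irrel = window_mean(irrelevant, 80, 120)
verdict = "information present" if a_probe - a_irrel > 1.0 else "information absent"
print(f"probe {a_probe:.2f} vs irrelevant {a_irrel:.2f}: {verdict}")
```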
Ref: http://seminars4you.info/ComputerScience.html
MICROSOFT PALLADIUM
The Next-Generation Secure Computing Base (NGSCB), formerly known as Palladium, is a software architecture designed by Microsoft that is expected to implement the "Trusted Computing" concept in future versions of the Microsoft Windows operating system. Palladium is part of Microsoft's Trustworthy Computing initiative, and Microsoft's stated aim for it is to increase the security and privacy of computer users. Palladium involves a new breed of hardware and applications along with changes to the architecture of the Windows operating system. Designed to work side-by-side with the existing functionality of Windows, this significant evolution of the personal computer platform will introduce a level of security that meets the rising customer requirements for data protection, integrity and distributed collaboration. It is designed to give people greater security, personal privacy and system integrity, and it also addresses Internet security, such as protecting data from viruses and hacking.

In addition to new core components in Windows that will move the Palladium effort forward, Microsoft is working with hardware partners to build Palladium components and features into their products. The new hardware architecture involves some changes to CPUs that are significant from a functional perspective. Palladium also calls for a new piece of hardware that might be referred to as a security chip, which will provide a set of cryptographic functions and keys central to the design. There are also associated changes to the chipset, the graphics system and the I/O system through the USB port, all designed to create a comprehensive security environment.
"Palladium" is the code name for an evolutionary set of features for the Microsoft Windows operating system. When combined with a new breed of hardware and applications, "Palladium" gives individuals and groups of users greater data security, personal privacy and system integrity. Designed to work side-by-side with the existing functionality of Windows, this significant evolution of the personal computer platform will introduce a level of security that meets the rising customer requirements for data protection, integrity and distributed collaboration .
ASYMMETRIC DIGITAL SUBSCRIBER LINE (ADSL)
Digital Subscriber Lines (DSL) are used to deliver high-rate digital data over existing ordinary phone lines. A new modulation technology called Discrete Multitone (DMT) allows the transmission of high-speed data. DSL facilitates the simultaneous use of normal telephone services, ISDN, and high-speed data transmission, e.g., video. DMT-based DSL can be seen as the transition from existing copper lines to future fiber cables. This makes DSL economically interesting for the local telephone companies, which can offer customers high-speed data services even before switching to fiber optics.
DSL is a newly standardized transmission technology facilitating simultaneous use of normal telephone services, data transmission of 6 Mbit/s downstream, and Basic Rate Access (BRA). DSL can be seen as an FDM system in which the available bandwidth of a single copper loop is divided into three parts. The baseband occupied by POTS is split from the data channels using a method that guarantees POTS service in the case of an ADSL system failure (e.g., passive filters).
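To make the FDM picture concrete, the sketch below lays out one plausible band plan for a DMT-based line. The band edges and the 4.3125 kHz tone spacing are typical ADSL figures used purely for illustration; they are not taken from the text above.

```python
# Illustrative DMT/FDM band plan for an ADSL copper loop: POTS stays in the
# baseband, while upstream and downstream data use blocks of ~4.3125 kHz
# sub-carriers ("tones") above it. Band edges are typical, approximate values.
TONE_SPACING_HZ = 4312.5

BANDS = {
    "POTS (voice)":    (0,       4_000),
    "upstream data":   (25_000,  138_000),
    "downstream data": (138_000, 1_104_000),
}

for name, (lo, hi) in BANDS.items():
    tones = int((hi - lo) // TONE_SPACING_HZ)
    print(f"{name:16s} {lo/1000:7.1f} - {hi/1000:7.1f} kHz  (~{tones} DMT tones)")
```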
Thursday, April 28, 2011
4G Wireless Technology
As the virtual centre of excellence in mobile and personal communications (Mobile VCE) moves into its second core research programme it has been decided to set up a fourth generation (4G) visions group aimed at harmonising the research work across the work areas and amongst the numerous researchers working on the programme. This paper outlines the initial work of the group and provides a start to what will become an evolving vision of 4G. A short history of previous generations of mobile communications systems and a discussion of the limitations of third generation (3G) systems are followed by a vision of 4G for 2010 based on five elements: fully converged services, ubiquitous mobile access, diverse user devices, autonomous networks and software dependency. This vision is developed in more detail from a technology viewpoint into the key areas of networks and services, software systems and wireless access.
The major driver to change in the mobile area in the last ten years has been the massive enabling implications of digital technology, both in digital signal processing and in service provision. The equivalent driver now, and in the next five years, will be the all-pervasiveness of software in both networks and terminals. The digital revolution is well underway and we stand at the doorway to the software revolution. Accompanying these changes are societal developments involving the extensions in the use of mobiles. Starting out from speech-dominated services we are now experiencing massive growth in applications involving SMS (Short Message Service) together with the start of Internet applications using WAP (Wireless Application Protocol) and i-mode. The mobile phone has not only followed the watch, the calculator and the organiser as an essential personal accessory but has subsumed all of them. With the new Internet extensions it will also lead to a convergence of the PC, hi-fi and television and provide mobility to facilities previously only available on one network.
Fiber Distributed Data Interface (FDDI)
The Fiber Distributed Data Interface (FDDI) specifies a 100-Mbps token-passing, dual-ring LAN using fiber-optic cable. FDDI is frequently used as a high-speed backbone technology because of its support for high bandwidth and for greater distances than copper. Relatively recently, a related copper specification called Copper Distributed Data Interface (CDDI) has emerged to provide 100-Mbps service over copper; CDDI is the implementation of FDDI protocols over twisted-pair copper wire. This chapter focuses mainly on FDDI specifications and operations, but it also provides a high-level overview of CDDI. FDDI uses a dual-ring architecture with traffic on each ring flowing in opposite directions (called counter-rotating). The dual rings consist of a primary and a secondary ring. During normal operation, the primary ring is used for data transmission, and the secondary ring remains idle. The primary purpose of the dual rings, as will be discussed in detail later in this chapter, is to provide superior reliability and robustness.
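A toy sketch of the dual-ring behaviour follows: stations normally forward traffic around the primary ring, and when a link breaks, the station next to the break wraps onto the counter-rotating secondary ring so the destination remains reachable. The four-station topology and names are invented for the example.

```python
# Toy model of FDDI's counter-rotating dual ring: walk the primary ring toward
# the destination and wrap into the reverse (secondary) direction at a broken
# link. Stations and the failed link are hypothetical.
stations = ["A", "B", "C", "D"]          # primary ring order: A->B->C->D->A

def path_to(src, dst, failed_link=None):
    n = len(stations)
    cur, path, direction = src, [src], +1
    while cur != dst:
        nxt = stations[(stations.index(cur) + direction) % n]
        if direction == +1 and failed_link in ((cur, nxt), (nxt, cur)):
            direction = -1               # wrap at the station beside the break
            continue
        path.append(nxt)
        cur = nxt
    return path

print(path_to("A", "C"))                             # ['A', 'B', 'C'] on the primary ring
print(path_to("A", "C", failed_link=("B", "C")))     # wrapped: ['A', 'B', 'A', 'D', 'C']
```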
FDDI was developed by the American National Standards Institute (ANSI) X3T9.5 standards committee in the mid-1980s. At the time, high-speed engineering workstations were beginning to tax the bandwidth of existing local-area networks (LANs) based on Ethernet and Token Ring. A new LAN medium was needed that could easily support these workstations and their new distributed applications. At the same time, network reliability had become an increasingly important issue as system managers migrated mission-critical applications from large computers to networks. FDDI was developed to fill these needs. After completing the FDDI specification, ANSI submitted FDDI to the International Organization for Standardization (ISO), which created an international version of FDDI that is completely compatible with the ANSI standard version.
HiperLAN
Recently, demand for high-speed Internet access has been increasing rapidly, and many people enjoy broadband wired Internet access services using ADSL (Asymmetric Digital Subscriber Line) or cable modems at home. At the same time, the cellular phone has become very popular, and users enjoy its location-free and wire-free services; it also enables people to connect their laptop computers to the Internet in a location-free and wire-free manner. However, present cellular systems like GSM (Global System for Mobile communications) provide much lower data rates than the wired access systems, which offer over a few Mbps (megabits per second). Even in the next-generation cellular system, UMTS (Universal Mobile Telecommunications System), the maximum data rate of the initial service is limited to 384 kbps; therefore even UMTS cannot satisfy users' expectation of high-speed wireless Internet access. Hence the Mobile Broadband System (MBS) is becoming popular and important, and wireless LAN (Local Area Network) technology such as the ETSI (European Telecommunication Standardization Institute) standard HIPERLAN (High Performance Radio Local Area Network) type 2 (denoted H/2) is regarded as a key to providing high-speed wireless access in MBS. H/2 aims at providing high-speed multimedia services, security of services, and handover when roaming between local and wide areas as well as between corporate and public networks. It also aims at increasing the throughput of datacom as well as video-streaming applications. It operates in the 5 GHz band with a 100 MHz spectrum. The WLAN is W-ATM based and is designed to extend the services of fixed ATM networks to mobile users. H/2 is connection oriented, with a connection duration of 2 ms or multiples thereof, and connections over the air are time-division multiplexed. H/2 allows interconnection with virtually any type of fixed network technology and can carry Ethernet frames, ATM cells and IP packets. It follows dynamic frequency allocation and offers bit rates of 54 Mbps.
Hyper Threading
Intel’s Hyper-Threading Technology brings the concept of simultaneous multi-threading to the Intel Architecture. Hyper-Threading Technology makes a single physical processor appear as two logical processors; the physical execution resources are shared and the architecture state is duplicated for the two logical processors. From a software or architecture perspective, this means operating systems and user programs can schedule processes or threads to logical processors as they would on multiple physical processors. From a microarchitecture perspective, this means that instructions from both logical processors will persist and execute simultaneously on shared execution resources.
The first implementation of Hyper-Threading Technology was done on the Intel Xeon processor MP. In this implementation there are two logical processors on each physical processor. The logical processors have their own independent architecture state, but they share nearly all the physical execution and hardware resources of the processor. The goal was to implement the technology at minimum cost while ensuring that one logical processor can make forward progress even if the other is stalled, and to deliver full performance even when there is only one active logical processor.
The potential for Hyper-Threading Technology is tremendous; our current implementation has only just begun to tap into this potential. Hyper-Threading Technology is expected to be viable from mobile processors to servers; its introduction into market segments other than servers is only gated by the availability and prevalence of threaded applications and workloads in those markets.
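From the operating system's point of view, those logical processors simply appear as extra CPUs to schedule threads onto. The sketch below only illustrates that software-visible model: it reports the logical CPU count and starts one worker thread per logical processor. It is a toy Python example, not Intel's implementation, and Python's global interpreter lock means it will not demonstrate true simultaneous execution of the workers.

```python
# The OS sees logical processors as ordinary CPUs and schedules threads onto
# them; whether two threads share one physical core is invisible to software.
import os
import threading

def worker(n):
    total = sum(i * i for i in range(n))             # toy CPU-bound workload
    print(f"{threading.current_thread().name}: {total}")

logical_cpus = os.cpu_count() or 1                   # counts logical processors
print(f"logical processors visible to the OS: {logical_cpus}")

threads = [threading.Thread(target=worker, args=(100_000,), name=f"t{i}")
           for i in range(logical_cpus)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```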
OVONIC UNIFIED MEMORY
Ovonic unified memory (OUM) is an advanced memory technology that uses a chalcogenide alloy (GeSbTe). The alloy has two states: a high-resistance amorphous state and a low-resistance polycrystalline state, which are used to represent the reset and set states respectively. The performance and attributes of the memory make it an attractive alternative to flash memory and potentially competitive with existing nonvolatile memory technologies.
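A toy model of how those two states could encode bits follows: reading a cell amounts to comparing its resistance with a threshold lying between the amorphous and polycrystalline values. The resistance figures and the set-equals-one convention are purely illustrative, not device data.

```python
# Toy read-out of phase-change cells: low resistance (polycrystalline, "set")
# is taken here as logical 1, high resistance (amorphous, "reset") as logical 0.
SET_OHMS = 10_000              # illustrative low-resistance set state
RESET_OHMS = 1_000_000         # illustrative high-resistance reset state
READ_THRESHOLD_OHMS = 100_000  # placed between the two states

def read_bit(resistance_ohms):
    return 1 if resistance_ohms < READ_THRESHOLD_OHMS else 0

cells = [SET_OHMS, RESET_OHMS, SET_OHMS, SET_OHMS]
print([read_bit(r) for r in cells])    # -> [1, 0, 1, 1]
```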
Almost 25% of the worldwide chip market consists of memory devices, each type used for its specific advantages: the high speed of SRAM, the high integration density of DRAM, or the nonvolatile capability of flash memory. The industry is searching for a holy grail of future memory technologies to serve the upcoming market of portable and wireless devices. These applications are already available based on existing memory technology, but for successful market penetration, higher performance at a lower price is required.

The existing technologies are characterized by the following limitations: DRAMs are difficult to integrate, SRAMs are expensive, flash memory supports only a limited number of read and write cycles, and EPROMs have high power requirements and poor flexibility. There is a growing need for a nonvolatile memory technology for high-density stand-alone and embedded CMOS applications with faster write speed and higher endurance than existing nonvolatile memories. OUM is a promising technology to meet this need. R. G. Neale, D. L. Nelson, and Gordon E. Moore originally reported a phase-change memory array based on chalcogenide materials in 1970. Improvements in phase-change materials technology subsequently paved the way for the development of commercially available rewritable CD and DVD optical disks. These advances, coupled with significant technology scaling and a better understanding of the fundamental electrical device operation, have motivated development of OUM technology at the present-day technology node.
Service Oriented Architecture (SOA)
SOA is a design for linking business and computational resources (principally organizations, applications and data) on demand to achieve the desired results for service consumers (which can be end users or other services). Service-orientation describes an architecture that uses loosely coupled services to support the requirements of business processes and users. Resources on a network in a SOA environment are made available as independent services that can be accessed without knowledge of their underlying platform implementation. These concepts can be applied to business, software and other types of producer/consumer systems.
The main drivers for SOA adoption are that it links computational resources and promotes their reuse. Enterprise architects believe that SOA can help businesses respond more quickly and cost-effectively to changing market conditions. This style of architecture promotes reuse at the macro (service) level rather than the micro (object) level.
The following guiding principles define the ground rules for the development, maintenance, and usage of SOA:
- Reuse, granularity, modularity, composability, componentization, and interoperability
- Compliance with standards (both common and industry-specific)
- Services identification and categorization, provisioning and delivery, and monitoring and tracking.
SOA implementations rely on a mesh of software services (a mesh, like a web or net, has many attached or interwoven strands). Services comprise unassociated, loosely coupled units of functionality that have no calls to each other embedded in them. Each service implements one action, such as filling out an online application for an account, viewing an online bank statement, or placing an online booking or airline ticket order. Instead of embedding calls to each other in their source code, services use defined protocols that describe how they pass and parse messages, using description metadata.
SOA developers associate individual SOA objects by using orchestration (orchestration describes the automated arrangement, coordination, and management of complex computer systems, middleware, and services). In the process of orchestration the developer associates software functionality (the services) in a non-hierarchical arrangement (in contrast to a class hierarchy), using a software tool that contains a complete list of all available services, their characteristics, and the means to build an application utilizing these services.
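As a minimal sketch of the loose coupling described above, the example below exposes a single-action "statement" service behind a plain JSON-over-HTTP contract and calls it from a consumer that knows only the message format and the address. The endpoint, port, field names and stub data are hypothetical; a production SOA would normally add formal service descriptions and an orchestration layer.

```python
# Minimal loosely coupled service and consumer: the consumer depends only on
# the message contract (JSON over HTTP), not on the service's implementation.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread
from urllib.request import Request, urlopen

class StatementService(BaseHTTPRequestHandler):
    def do_POST(self):
        request = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = {"account": request["account"], "balance": 1234.56}   # stub data
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):      # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 8081), StatementService)
Thread(target=server.serve_forever, daemon=True).start()

req = Request("http://127.0.0.1:8081/statement",
              data=json.dumps({"account": "42"}).encode(),
              headers={"Content-Type": "application/json"})
print(json.loads(urlopen(req).read()))  # consumer sees only the reply message
server.shutdown()
```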
The Socket Interface
We must have an interface between application programs and the protocol software in order to use network facilities. My seminar is on a model of the interface between application programs and the TCP/IP protocols. The TCP/IP protocol standards do not specify exactly how application programs interact with the protocol software; thus the interface architecture is not standardized, and its design lies outside the scope of the protocol suite. It should further be noted that it is inappropriate to tie the protocols to a particular interface, because no single interface architecture works well on all systems. In particular, because protocol software resides in a computer's operating system, interface details depend on the operating system.
In spite of the lack of standards, a programmer must know about such interfaces to be able to use TCP/IP. Although I have chosen the UNIX operating system to explain the model, it is widely accepted and used in many systems. Note also that the operations listed here are not a standard in any sense.
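As a concrete illustration of one such interface, the sketch below walks through the familiar UNIX-style socket sequence (socket, bind, listen, accept on the server side; socket, connect on the client side) through Python's wrapper over those calls. The loopback address, port number and echo behaviour are arbitrary choices for the example.

```python
# Classic socket-interface sequence between an application and TCP/IP:
# server: socket -> bind -> listen -> accept -> recv/send
# client: socket -> connect -> send/recv
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 9090))          # choose a local address and port
srv.listen(1)                          # passive open: wait for connections

def handle_one_client():
    conn, addr = srv.accept()          # blocks until a client connects
    with conn:
        data = conn.recv(1024)         # read from the TCP stream
        conn.sendall(b"echo: " + data) # write back on the same connection

t = threading.Thread(target=handle_one_client)
t.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 9090))   # active open toward the server
    cli.sendall(b"hello")
    print(cli.recv(1024).decode())     # -> "echo: hello"

t.join()
srv.close()
```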
HTAM
The amazing growth of the Internet and telecommunications is powered by ever-faster systems demanding increasingly higher levels of processor performance. To keep up with this demand we cannot rely entirely on traditional approaches to processor design. Microarchitecture techniques used to achieve past processor performance improvements (superpipelining, branch prediction, superscalar execution, out-of-order execution, caches) have made microprocessors increasingly complex, with more transistors and higher power consumption. In fact, transistor counts and power are increasing at rates greater than processor performance. Processor architects are therefore looking for ways to improve performance at a greater rate than transistor counts and power dissipation. Intel’s Hyper-Threading Technology is one solution.
Intel’s Hyper-Threading Technology brings the concept of simultaneous multi-threading to the Intel Architecture. Hyper-Threading Technology makes a single physical processor appear as two logical processors; the physical execution resources are shared and the architecture state is duplicated for the two logical processors. From a software or architecture perspective, this means operating systems and user programs can schedule processes or threads to logical processors as they would on multiple physical processors. From a microarchitecture perspective, this means that instructions from both logical processors will persist and execute simultaneously on shared execution resources. This paper describes the Hyper-Threading Technology architecture and discusses the microarchitecture details of Intel's first implementation on the Intel Xeon processor family. Hyper-Threading Technology is an important addition to Intel’s enterprise product line and will be integrated into a wide variety of products.
Embedded DRAM
Even though the word DRAM has been quite common for many decades, development in the field of DRAM was very slow. The storage medium reached its present semiconductor form only after long scientific research. Once the semiconductor storage medium was well accepted, plans were put forward to integrate the logic circuits associated with the DRAM along with the DRAM itself. However, technological complexities and the economic justification for such a complex integrated circuit are difficult hurdles to overcome. Although scientific breakthroughs are numerous in the commodity DRAM industry, similar techniques are not always appropriate when high-performance logic circuits are included on the same substrate. Hence, eDRAM pioneers have begun to develop numerous integration schemes.
This seemingly subtle semantic difference significantly impacts mask count, system performance, peripheral circuit complexity, and the total memory capacity of eDRAM products. Furthermore, corporations with aggressive commodity DRAM technology do not have expertise in the design of complicated digital functions and are not able to assemble a design team to complete the task of a truly merged DRAM-logic product. Conversely, small application-specific integrated circuit (ASIC) design corporations, unfamiliar with DRAM-specific elements and design practice, cannot carry out an efficient merged-logic design and therefore mar the beauty of the original intent to integrate. Clearly, the reuse of process technology is an enabling factor en route to cost-effective eDRAM technology. By the same account, modern circuit designers should be familiar with the new elements of eDRAM technology so that they can efficiently reuse DRAM-specific structures and elements in other digital functions. The reuse of additional electrical elements is a methodology that will make eDRAM more than just a memory interconnected to a few million Boolean gates.
BLAST
The explosive growth of both the wireless industry and the Internet is creating a huge market opportunity for wireless data access. Limited Internet access, at very low speeds, is already available as an enhancement to some existing cellular systems. However, those systems were designed with the purpose of providing voice services and at most short messaging, not fast data transfer. Traditional wireless technologies are not very well suited to meet the demanding requirements of providing very high data rates with the ubiquity, mobility and portability characteristic of cellular systems. Increased use of antenna arrays appears to be the only means of enabling the data rates and capacities needed for wireless Internet and multimedia services. While the deployment of base-station arrays is becoming universal, it is really the simultaneous deployment of base-station and terminal arrays that can unleash unprecedented levels of performance by opening up multiple spatial signaling dimensions.
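The benefit of arrays at both ends can be quantified with the standard MIMO capacity expression C = log2 det(I + (SNR/Nt) H H^H) bits/s/Hz for equal power per transmit antenna, which in rich scattering grows roughly linearly with the smaller of the two antenna counts. The sketch below evaluates it for a random 4x4 channel; the channel model and SNR are illustrative and not taken from the text.

```python
# Capacity of a MIMO channel with equal power allocation across Nt antennas.
import numpy as np

def mimo_capacity_bits_per_hz(H, snr_linear):
    nr, nt = H.shape
    M = np.eye(nr) + (snr_linear / nt) * (H @ H.conj().T)
    sign, logdet = np.linalg.slogdet(M)      # det of a positive definite matrix
    return logdet / np.log(2)

rng = np.random.default_rng(0)
# Illustrative Rayleigh-like 4x4 channel, unit average power per entry.
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
print(mimo_capacity_bits_per_hz(H, snr_linear=10 ** (20 / 10)))   # at 20 dB SNR
```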
Theoretically, user data rates as high as 2 Mb/s will be supported in certain environments, although recent studies have shown that approaching those rates might only be feasible under extremely favorable conditions, in the vicinity of the base station and with no other users competing for bandwidth. Some fundamental barriers, related to the nature of the radio channel as well as to the limited bandwidth available at the frequencies of interest, stand in the way of the high data rates and low cost associated with wide access.
Voice User Interface
In its most generic sense a voice portal can be defined as “speech-enabled access to Web-based information”. In other words, a voice portal provides telephone users with a natural-language interface to access and retrieve Web content. An Internet browser can provide Web access from a computer but not from a telephone; a voice portal is a way to do that. The voice portal market is exploding, with enormous opportunities for service providers to grow business and revenues. Voice-based Internet access uses rapidly advancing speech recognition technology to give users anytime, anywhere communication and access, via the human voice, over an office, wireless, or home phone. Here we describe the various technology factors that are making the voice portal the next big opportunity on the Web, as well as the various approaches that service providers and developers of voice portal solutions can follow to maximize this exciting new market opportunity.
For a voice portal to function, one of the most important technologies we have to include is a good VUI (Voice User Interface). There has been a great deal of development in the field of interaction between the human voice and the system, and it is starting to be applied in many other fields. Insurance, for example, has turned to interactive voice response (IVR) systems to provide telephonic customer self-service, reduce the load on call-center staff, and cut overall service costs. The promise is certainly there, but how well these systems perform, and ultimately whether customers leave the system satisfied or frustrated, depends in large part on the user interface. Many IVR applications use Touch-Tone interfaces, known as DTMF (dual-tone multi-frequency), in which customers are limited to making selections from a menu. As transactions become more complex, the effectiveness of DTMF systems decreases. In fact, IVR and speech recognition consultancy Enterprise Integration Group (EIG) reports that customer utilization rates of available DTMF systems in financial services, where transactions are primarily numeric, are as high as 90 percent; in contrast, customers' use of insurers' DTMF systems is less than 40 percent. Enter some more acronyms. Automated speech recognition (ASR) is the engine that drives today's voice user interface (VUI) systems. These let customers break the "menu barrier" and perform more complex transactions over the phone. "In many cases the increase in self-service when moving from DTMF to speech can be dramatic," said EIG president Rex Stringham.
The best VUI systems are "speaker independent": they understand naturally spoken dialog regardless of the speaker. That means not only local accents but regional dialects, local phrases such as "pop" versus "soda," people who talk fast (you know who you are), and all the various nuances of speech. Those nuances are good for human beings; they allow us to recognize each other by voice. For computers, however, they make the process much more difficult. That's why a handheld or pocket computer still needs a stylus, and why the "voice dialing" offered by some cell-phone companies still seems high-tech. Voice recognition is tough. Sophisticated packages can not only recognize a wide variety of speakers, they also allow experienced users to interrupt menu prompts ("barge-in") and can capture compound instructions such as "I'd like to transfer a thousand dollars from checking to savings" in one command rather than several.
These features are designed not only to overcome the limitations of DTMF but also to increase customer use and acceptance of IVR systems. The hope is that customers will eventually be comfortable telling a machine "I want to add a driver to my Camry's policy." Besides taking some of the load off customer service representatives, VUI vendors promise an attractive ROI to help get these systems into insurers' IT budgets. ASR systems can be enabled with voice authentication, eliminating the need for PINs and passwords. Call centers themselves will likely transform into units designed to support customers regardless of whether contact comes from a telephone, the Web, e-mail, or a wireless device. At the same time, the "voice Web" is evolving, where browsers or Wireless Application Protocol (WAP)-enabled devices display information based on what the user vocally asks for. "We're definitely headed toward multi-modal applications," Ehrlich predicts. ASR vendors are working to make sure that VUI evolves to free staff from dealing with voice-related channels; it's better to have them supporting the various modes of service that are just now beginning to emerge.
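To make the DTMF-versus-speech contrast concrete, here is a toy sketch of a digit-menu handler next to a crude intent matcher. In a real deployment the utterance would come from an ASR engine rather than typed text, and the menu options, intents and trigger phrases below are all invented for the example.

```python
# Toy contrast between a DTMF menu and a natural-language style VUI.
DTMF_MENU = {"1": "account balance", "2": "transfer funds", "3": "agent"}

INTENTS = {
    "transfer funds":  ["transfer", "move money", "send"],
    "account balance": ["balance", "how much"],
    "agent":           ["agent", "representative", "person"],
}

def handle_dtmf(digit):
    return DTMF_MENU.get(digit, "invalid selection")

def handle_speech(utterance):
    text = utterance.lower()
    for intent, phrases in INTENTS.items():
        if any(p in text for p in phrases):
            return intent
    return "agent"                       # fall back to a human when unsure

print(handle_dtmf("2"))
print(handle_speech("I'd like to transfer a thousand dollars from checking to savings"))
```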
iDEN (Integrated Digital Enhanced Network)
iDEN is a mobile telecommunications technology, developed by Motorola, which provides its users the benefits of a trunked radio and a cellular telephone. iDEN places more users in a given spectral space, compared to analog cellular and two-way radio systems, by using speech compression and time-division multiple access (TDMA). Notably, iDEN is designed, and licensed, to operate on individual frequencies that may not be contiguous. iDEN operates on 25 kHz channels but occupies only 20 kHz in order to provide interference protection via guard bands. By comparison, TDMA cellular (IS-54 and IS-136) is licensed in blocks of 30 kHz channels, but each emission occupies 40 kHz, and it is capable of serving the same number of subscribers per channel as iDEN. iDEN supports either three or six interconnect users (phone users) per channel, and either six or twelve dispatch users (push-to-talk users) per channel. Since there is no analog component of iDEN, mechanical duplexing in the handset is unnecessary, so time-domain duplexing is used instead, the same way that other digital-only technologies duplex their handsets. Also, like other digital-only technologies, hybrid or cavity duplexing is used at the base station (cell site).
iDEN technology is a highly innovative, cutting-edge system of technologies developed by Motorola to create an ideal, complete wireless communications system for today's fast-paced, busy lifestyle. Advanced capabilities bring together the features of dispatch radio, full-duplex telephone interconnect, short messaging service and data transmission.
iDEN technology offers you more than just a wireless phone; it's a complete Motorola communications system that you hold in your hand. It combines speakerphone, voice command, phone book, voice mail, digital two-way radio, mobile Internet and e-mail, wireless modems, voice activation, and voice recording so that you can virtually recreate your office on the road.
WAVELENGTH ROUTING IN OPTICAL NETWORKS
Optical networks are high-capacity telecommunications networks based on optical technologies and components that provide routing, grooming, and restoration at the wavelength level as well as wavelength-based services. This paper deals with the twin concepts of wavelength routing and wavelength conversion in optical networks. It discusses the various categories of wavelength switches, routing algorithms, wavelength conversion and categories of wavelength conversion. Finally, the paper deals with industry-related issues such as the gap between research and industry, the current and projected market for optical networking and DWDM equipment, and future directions of research in this field.
An optical network consists of wavelength routers and end nodes connected in pairs by links. The wavelength-routing switches, or routing nodes, are interconnected by optical fibers. Although each link can support many signals, the signals must be of distinct wavelengths. Routers transmit signals on the same wavelength on which they are received. An all-optical wavelength-routed network is one that carries data from one access station to another without any O/E (optical/electronic) conversions.
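A small sketch of the routing-and-wavelength-assignment idea follows: choose a shortest route by hop count, then pick the first wavelength that is free on every link of that route, which is the wavelength-continuity constraint that applies when no converters are present. The four-node topology and two-wavelength budget are invented for illustration.

```python
# Shortest-path routing plus first-fit wavelength assignment (no converters).
from collections import deque

GRAPH = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
NUM_WAVELENGTHS = 2
used = {}                                # (link, wavelength) -> lightpath

def shortest_path(src, dst):
    prev, queue, seen = {}, deque([src]), {src}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nbr in GRAPH[node]:
            if nbr not in seen:
                seen.add(nbr)
                prev[nbr] = node
                queue.append(nbr)
    return None

def assign_lightpath(src, dst):
    path = shortest_path(src, dst)
    links = list(zip(path, path[1:]))
    for wl in range(NUM_WAVELENGTHS):    # first-fit over the wavelength set
        if all((frozenset(link), wl) not in used for link in links):
            for link in links:
                used[(frozenset(link), wl)] = (src, dst)
            return path, wl
    return path, None                    # blocked: no wavelength free end to end

print(assign_lightpath("A", "D"))        # first request, e.g. (['A', 'B', 'D'], 0)
print(assign_lightpath("A", "D"))        # same route, next free wavelength
```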
Design Technique for Voice Browsers
Browser technology is changing very fast these days, and we are moving from the visual paradigm to the voice paradigm. The voice browser is the technology for entering this paradigm. A voice browser is a “device which interprets a (voice) markup language and is capable of generating voice output and/or interpreting voice input, and possibly other input/output modalities." This paper describes the requirements for two forms of character-set grammar: as a matter of preference or implementation, one is more easily read by (most) humans, while the other is geared toward machine generation.
The definition of a voice browser, above, is a broad one. The fact that the system deals with speech is obvious given the first word of the name, but what makes a software system that interacts with the user via speech a "browser"? The information that the system uses (for either domain data or dialog flow) is dynamic and comes from somewhere on the Internet. From an end user's perspective, the impetus is to provide a service similar to what graphical browsers of HTML and related technologies do today, but on devices that are not equipped with full browsers or even the screens to support them. This situation is only exacerbated by the fact that much of today's content depends on the ability to run scripting languages and third-party plug-ins to work correctly.
Much of the effort concentrates on using the telephone as the first voice-browsing device. This is not to say that the telephone is the preferred embodiment of a voice browser, only that the number of such access devices is huge and that, because the telephone sits at the opposite end of the continuum from the graphical browser, it highlights the requirements that make a speech interface viable. By the first meeting it was clear that this scope-limiting was also needed in order to make progress, given that there are significant challenges in designing a system that uses or integrates with existing content, or that automatically scales to the features of various access devices.
REAL TIME IMAGE PROCESSING APPLIED TO TRAFFIC – QUEUE DETECTION ALGORITHM
This paper primarily aims at the new technique of video image processing used to solve problems associated with real-time road traffic control systems. There is a growing demand for road traffic data of all kinds. Increasing congestion problems and problems associated with existing detectors have spawned an interest in new vehicle detection technologies, but such systems have difficulties with congestion, shadows and lighting transitions.
A problem with any practical image processing application to road traffic is that real-world images must be processed in real time. Various algorithms, mainly based on background techniques, have been developed for this purpose; since background-based algorithms are very sensitive to ambient lighting conditions, they have not yielded the expected results. So a real-time image tracking approach using edge detection techniques was developed for detecting vehicles under these trouble-posing conditions.
This paper gives a general overview of the image processing techniques used in the analysis of video images, the problems associated with them, methods of vehicle detection and tracking, and pre-processing techniques, and it also presents the real-time image processing technique used to measure traffic queue parameters.
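The sketch below illustrates, on synthetic frames, the two approaches contrasted above: a background-difference test and an edge-density test over a fixed detection zone. The zone coordinates, thresholds and OpenCV-based implementation are illustrative choices rather than the algorithm from the paper; a real system calibrates these per camera and adds the pre-processing discussed here.

```python
# Background differencing vs. edge density inside a fixed detection zone,
# demonstrated on synthetic frames (a flat "road" and a bright "vehicle").
import cv2
import numpy as np

def zone(img, y0=100, y1=160, x0=200, x1=320):
    return img[y0:y1, x0:x1]             # arbitrary detection-zone window

def vehicle_by_background(frame, background, diff_thresh=25, fill_ratio=0.2):
    diff = cv2.absdiff(zone(frame), zone(background))
    changed = np.count_nonzero(diff > diff_thresh) / diff.size
    return changed > fill_ratio          # sensitive to lighting changes

def vehicle_by_edges(frame, edge_ratio=0.05):
    edges = cv2.Canny(zone(frame), 50, 150)
    return np.count_nonzero(edges) / edges.size > edge_ratio

background = np.full((240, 360), 90, np.uint8)
frame = background.copy()
cv2.rectangle(frame, (230, 110), (300, 150), 200, -1)   # synthetic vehicle

print("background method:", vehicle_by_background(frame, background))
print("edge method:      ", vehicle_by_edges(frame))
```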
Speech Recognition
Language is man's most important means of communication, and speech is its primary medium. Speech research provides an international forum for communication among researchers in the disciplines that contribute to our understanding of the production, perception, processing, learning and use of speech. Spoken interaction, both between human interlocutors and between humans and machines, is inescapably embedded in the laws and conditions of communication, which comprise the encoding and decoding of meaning as well as the mere transmission of messages over an acoustical channel. Here we deal with this interaction between man and machine through synthesis and recognition applications.
The paper dwells on speech technology and the conversion of speech into analog and digital waveforms that can be understood by machines.
Speech recognition, or speech-to-text, involves capturing and digitizing the sound waves, converting them to basic language units or phonemes, constructing words from phonemes, and contextually analyzing the words to ensure correct spelling for words that sound alike. Speech Recognition is the ability of a computer to recognize general, naturally flowing utterances from a wide variety of users. It recognizes the caller's answers to move along the flow of the call.
We have emphasized the modeling of speech units and grammar on the basis of the Hidden Markov Model. Speech recognition allows you to provide input to an application with your voice. The applications and limitations of this subject highlight the impact of speech processing in our modern technical field.
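As a minimal illustration of HMM-based decoding, the sketch below runs the Viterbi algorithm over a toy two-state model to recover the most likely hidden state sequence for a short observation string. The states, observations and probabilities are made up; real recognizers use phoneme-level HMMs with far richer acoustic models.

```python
# Viterbi decoding over a toy two-state HMM with discrete observations.
states = ["S1", "S2"]
start_p = {"S1": 0.6, "S2": 0.4}
trans_p = {"S1": {"S1": 0.7, "S2": 0.3}, "S2": {"S1": 0.4, "S2": 0.6}}
emit_p = {"S1": {"lo": 0.5, "hi": 0.5}, "S2": {"lo": 0.1, "hi": 0.9}}

def viterbi(observations):
    # V[t][s] = (best probability of ending in s at time t, best path to s)
    V = [{s: (start_p[s] * emit_p[s][observations[0]], [s]) for s in states}]
    for obs in observations[1:]:
        layer = {}
        for s in states:
            prob, prev = max((V[-1][p][0] * trans_p[p][s] * emit_p[s][obs], p)
                             for p in states)
            layer[s] = (prob, V[-1][prev][1] + [s])
        V.append(layer)
    best = max(V[-1], key=lambda s: V[-1][s][0])
    return V[-1][best][1], V[-1][best][0]

print(viterbi(["lo", "hi", "hi"]))       # most likely state path and its probability
```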
Optical Computers
Computers have enhanced human life to a great extent. The goal of improving computer speed has resulted in the development of Very Large Scale Integration (VLSI) technology, with smaller device dimensions and greater complexity.
VLSI technology has revolutionized the electronics industry; at the same time, our daily lives demand solutions to increasingly sophisticated and complex problems, which require greater speed and better performance from computers.
For these reasons, it is unfortunate that VLSI technology is approaching its fundamental limits in the sub-micron miniaturization process. It is now possible to fit up to 300 million transistors on a single silicon chip. According to Moore's law, the number of transistor switches that can be put onto a chip is estimated to double roughly every 18 months. Further miniaturization of lithography introduces several problems such as dielectric breakdown, hot carriers, and short-channel effects, all of which combine to seriously degrade device reliability. Even if developing technology temporarily overcomes these physical problems, we will continue to face them as long as the demand for higher integration keeps growing. A dramatic solution is therefore needed; unless we gear our thoughts toward a totally different pathway, we will not be able to further improve computer performance in the future.
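To put that growth rate in perspective, the sketch below projects transistor counts under the 18-month doubling rule. The 300-million figure comes from the text; the reference year is assumed purely for illustration:

# Minimal sketch of a Moore's-law projection as described above.
# Assumptions: the 300-million-transistor baseline comes from the text; the
# starting year and the strict 18-month doubling rule are illustrative only.
BASE_YEAR = 2010            # hypothetical reference year
BASE_TRANSISTORS = 300e6    # transistors on a single chip at the reference year
DOUBLING_MONTHS = 18

def projected_transistors(year):
    """Project the transistor count for a given year under an 18-month doubling rule."""
    months = (year - BASE_YEAR) * 12
    return BASE_TRANSISTORS * 2 ** (months / DOUBLING_MONTHS)

for year in (2010, 2013, 2016):
    print(year, f"{projected_transistors(year):.2e}")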
Optical interconnections and optical integrated circuits provide a way out of these limitations on computational speed and complexity inherent in conventional electronics. Optical computers will use photons traveling in optical fibers or thin films, instead of electrons, to perform the appropriate functions. In the optical computer of the future, electronic circuits and wires will be replaced by a few optical fibers and films, making the systems more efficient, free of interference, more cost effective, lighter and more compact. Optical components would not need insulators like those required between electronic components, because they do not suffer from crosstalk. Indeed, multiple frequencies (or colors) of light can travel through optical components without interfering with each other, allowing photonic devices to process multiple streams of data simultaneously.
Mobile Ad Hoc Network (MANET)
The term MANET (Mobile Ad Hoc Network) refers to a multihop, packet-based wireless network composed of a set of mobile nodes that can communicate and move at the same time, without using any kind of fixed wired infrastructure. MANETs are self-organizing and adaptive networks that can be formed and torn down on the fly without the need for any centralized administration.
As in other packet data networks, one-to-one communication in a MANET is achieved by unicast routing of each single packet. Routing in a MANET is challenging due to constraints on transmission bandwidth, battery power and CPU time, and the requirement to cope with the frequent topological changes resulting from node mobility. The nodes of a MANET cooperate in routing packets to destination nodes, since each node can communicate only with the nodes located within its transmission radius R, while the source and destination may be separated by a distance much greater than R.
A first attempt to cope with mobility is to use techniques that tailor conventional routing protocols to the mobile environment while preserving their nature. Protocols designed around such techniques are referred to as table-driven or proactive protocols. To guarantee that routing tables are up to date and reflect the actual network topology, nodes running a proactive protocol continuously exchange route updates and recalculate paths to all possible destinations. The main advantage of proactive protocols is that a route is immediately available when it is needed for data transmission.
A different approach in routing protocol design is to calculate a path only when it is necessary for data transmission. These protocols are called reactive, or on-demand, routing protocols. A reactive protocol is characterized by a path discovery procedure and a maintenance procedure. Path discovery is based on a query-reply cycle that floods queries through the network; the destination is eventually reached by a query and at least one reply is generated. The path discovery procedure is invoked when there is a need for data transmission and the source does not know a path to the destination. Discovered paths are maintained by the route maintenance procedure until they are no longer in use.
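The sketch below illustrates the flooding-based query-reply idea on a toy topology: a route request spreads hop by hop until it reaches the destination, and the recorded reverse path stands in for the reply. The topology and node names are hypothetical, and real reactive protocols such as AODV or DSR add sequence numbers, timers and route maintenance that are omitted here:

# Minimal sketch of reactive (on-demand) path discovery by query flooding.
# Assumptions: the topology below is hypothetical, and radio broadcast is
# simulated with a breadth-first flood that records the path each query took.
from collections import deque

# Which nodes are within each other's transmission radius R (hypothetical links).
neighbors = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def discover_route(source, destination):
    """Flood a route request from source; return the path found, or None."""
    visited = {source}
    queue = deque([[source]])              # each entry is the path the query has taken so far
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == destination:
            return path                    # the "reply" would travel back along this path
        for nxt in neighbors[node]:        # rebroadcast the query to all neighbours
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(discover_route("A", "E"))            # e.g. ['A', 'B', 'D', 'E']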
GREEN CLOUD
Cloud computing offers utility-oriented IT services to users worldwide. It enables hosting of applications from consumer, scientific and business domains. However, data centres hosting cloud computing applications consume huge amounts of energy, contributing to high operational costs and a large carbon footprint. With energy shortages and global climate change leading our concerns these days, the power consumption of data centers has become a key issue. We therefore need green cloud computing solutions that can not only save energy but also reduce operational costs. The vision for energy-efficient management of cloud computing environments is presented here, along with a green scheduling algorithm that works by powering down servers when they are not in use.
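As a rough illustration of such a scheduler, the sketch below packs the offered load onto as few servers as possible and reports which machines could be powered down. The capacities, loads and the first-fit-decreasing heuristic are assumptions for illustration; the text does not specify the algorithm's details:

# Minimal sketch of a green scheduling idea: consolidate load onto as few
# servers as possible and power down the rest.  The capacities and loads
# are hypothetical; real schedulers must also respect SLAs and migration costs.
SERVER_CAPACITY = 100            # arbitrary units of CPU per server
servers = {"s1": 0, "s2": 0, "s3": 0, "s4": 0}   # current load per server

def schedule(tasks):
    """First-fit-decreasing placement; returns the servers to keep on and to power down."""
    for load in sorted(tasks, reverse=True):
        for name in servers:                     # fill servers in a fixed order so later ones stay idle
            if servers[name] + load <= SERVER_CAPACITY:
                servers[name] += load
                break
    powered_on = {n for n, load in servers.items() if load > 0}
    powered_off = set(servers) - powered_on      # these can be put into a low-power state
    return powered_on, powered_off

on, off = schedule([40, 35, 30, 20, 10])
print("keep on:", sorted(on), "power down:", sorted(off))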
In 1969, Leonard Kleinrock, one of the chief scientists of the original Advanced Research Projects Agency Network (ARPANET) which seeded the Internet, said: "As of now, computer networks are still in their infancy, but as they grow up and become sophisticated, we will probably see the spread of 'computer utilities' which, like present electric and telephone utilities, will service individual homes and offices across the country." This vision of computing utilities based on a service provisioning model anticipated the massive transformation of the entire computing industry in the 21st century, whereby computing services are readily available on demand, like other utility services in today's society. Under this model, users (consumers) pay providers only when they access the computing services, and they no longer need to invest heavily or encounter difficulties in building and maintaining complex IT infrastructure.
PHISHING
In the field of computer security, phishing is the criminally fraudulent process of attempting to acquire sensitive information such as usernames, passwords and credit card details by masquerading as a trustworthy entity in an electronic communication. Phishing is typically carried out through fraudulent e-mail that attempts to get you to divulge personal data that can then be used for illegitimate purposes.
There are many variations on this scheme. It is possible to phish for other information in addition to usernames and passwords, such as credit card numbers, bank account numbers, social security numbers and mothers' maiden names. Phishing presents direct risks through the use of stolen credentials, and indirect risks to institutions that conduct business online through erosion of customer confidence. The damage caused by phishing ranges from denial of access to e-mail to substantial financial loss.
This report is also concerned with anti-phishing techniques. There are several different techniques to combat phishing, including legislation and technology created specifically to protect against it. No single technology will completely stop phishing; however, a combination of good organization and practice, proper application of current technologies, and improvements in security technology has the potential to drastically reduce the prevalence of phishing and the losses suffered from it. Anti-phishing software is designed to prevent phishing and trespassing on confidential information: it tracks websites and monitors activity, and any suspicious behavior can be automatically reported and even reviewed as a report after a period of time.
The report also covers detecting phishing attacks, how to prevent and avoid being scammed, how to react when you suspect or discover a phishing attack, and what you can do to help stop phishers.
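For a flavour of the automated side, the sketch below applies a few classic heuristic checks to a URL, of the kind anti-phishing tools use as one signal among many. The specific rules, keywords and threshold are assumptions for illustration, not taken from any particular product:

# Minimal sketch of heuristic URL checks of the kind anti-phishing tools apply.
# Assumptions: the rules and the threshold below are illustrative only; real
# products combine many more signals (blacklists, page content, reputation).
import re
from urllib.parse import urlparse

def suspicion_score(url):
    """Return a small score counting classic phishing tell-tales in a URL."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    score = 0
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        score += 1                         # raw IP address instead of a domain name
    if "@" in url:
        score += 1                         # '@' can hide the real destination host
    if host.count(".") >= 4:
        score += 1                         # long subdomain chains imitating a brand
    if re.search(r"(login|verify|update|secure).*(account|bank)", url, re.I):
        score += 1                         # bait keywords commonly used in phishing links
    return score

for u in ("http://192.0.2.7/verify-account", "https://example.com/profile"):
    print(u, "->", "suspicious" if suspicion_score(u) >= 2 else "probably fine")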
Remote Frame Buffer (RFB) Protocol
Remote desktop software provides remote access: the ability of a user to log onto a remote computer or network from a distant location. This usually involves computers, a network, and some remote access software to connect to the network. Since this involves clients and servers connected across a network, a protocol is essential for efficient communication between them. The RFB protocol is one such protocol; it is used by the client and server to communicate with each other and thereby make remote access possible. The purpose of this paper is to give a general idea of how the protocol actually works. The paper also gives a broad idea of the various messages of the protocol and how these messages are sent and interpreted by the client and server modules, and it includes a simple implementation of the protocol that shows the various messages and methods and how the protocol is practically used for gaining remote access.
RFB (remote framebuffer) is a simple and efficient protocol that provides remote access to graphical user interfaces. As its name suggests, it works at the framebuffer level and is thus applicable to all windowing systems and applications, e.g. X11, Windows and Macintosh. Other protocols are also available; RFB is the protocol used in Virtual Network Computing (VNC) and its various forms. With the growing number of software products and services, such protocols play a very important role nowadays.
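As a small taste of the message exchange, the sketch below performs only the very first step of an RFB session, the ProtocolVersion handshake, in which the server announces a 12-byte version string such as "RFB 003.008" and the client echoes back the version it will use. The host and port are assumptions (5900 is the conventional port for display :0), and everything after the version exchange is omitted:

# Minimal sketch of the start of an RFB session: the ProtocolVersion handshake.
# Assumptions: a VNC/RFB server is listening on localhost:5900 (hypothetical),
# and we stop right after exchanging version strings rather than continuing
# with the security and initialisation messages.
import socket

HOST, PORT = "127.0.0.1", 5900           # display :0 conventionally maps to TCP port 5900

with socket.create_connection((HOST, PORT)) as sock:
    server_version = sock.recv(12)       # server speaks first: 12 bytes like b"RFB 003.008\n"
    print("server offers:", server_version.decode("ascii", errors="replace").strip())
    sock.sendall(server_version)         # echo the same version back to accept it
    # A real client would now read the list of security types, pick one,
    # authenticate, and send ClientInit before requesting framebuffer updates.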
Transparent Electronics
Transparent electronics is an emerging science and technology field focused on producing ‘invisible’ electronic circuitry and opto-electronic devices. Applications include consumer electronics, new energy sources, and transportation; for example, automobile windshields could transmit visual information to the driver. Glass in almost any setting could also double as an electronic device, possibly improving security systems or offering transparent displays. In a similar vein, windows could be used to produce electrical power. Other civilian and military applications in this research field include realtime wearable displays.
As in conventional Si/III–V-based electronics, the basic device structure is based on semiconductor junctions and transistors. However, the device building-block materials, the semiconductor, the electrical contacts, and the dielectric/passivation layers, must now be transparent in the visible range, a true challenge! Therefore, the first scientific goal of this technology must be to discover, understand, and implement transparent high-performance electronic materials. The second goal is their implementation and evaluation in transistor and circuit structures. The third goal relates to achieving application-specific properties, since transistor performance and materials-property requirements vary depending on the final product's device specifications. Consequently, enabling this revolutionary technology requires bringing together expertise from various pure and applied sciences, including materials science, chemistry, physics, electrical/electronic/circuit engineering, and display science.
PILL CAMERA
The aim of technology is to make products on a large scale at lower cost and with increased quality. Current technologies have achieved part of this, but manufacturing still operates at the macro level. The future lies in manufacturing products right from the molecular level. Research in this direction started back in the eighties; at that time, manufacturing at the molecular and atomic level was laughed at, but thanks to the advent of nanotechnology we have realized it to a certain extent. One such product is the PILL CAMERA, which is used in the diagnosis of conditions such as cancer, ulcers and anemia. It has brought about a revolution in the field of medicine. This tiny capsule can pass through our body without causing any harm.
It takes pictures of our intestine and transmits them to a receiver for computer analysis of our digestive system. This process can help in tracking any kind of disease related to the digestive system. We also discuss the drawbacks of the PILL CAMERA and how these drawbacks can be overcome using a grain-sized motor and a bi-directional wireless telemetry capsule. Besides this, we review the process of manufacturing products using nanotechnology. Some other important applications are also discussed, along with their potential impacts on various fields.
We have made great progress in manufacturing products. Looking back from where we stand now, we started with flint knives and stone tools and have reached the stage where we make such tools with more precision than ever. The leap in technology is great, but it is not going to stop here. With our present technology we manufacture products by casting, milling, grinding, chipping and the like, and with these technologies we have made more things at a lower cost and with greater precision than ever before. In the manufacture of these products we have been arranging atoms in great thundering statistical herds. All of us know manufactured products are made from atoms, and the properties of those products depend on how those atoms are arranged; if we rearrange the atoms in dirt, water and air, we get grass. The next step in manufacturing technology is to manufacture products at the molecular level. The technology used to achieve manufacturing at the molecular level is NANOTECHNOLOGY: the creation of useful materials, devices and systems through the manipulation of matter at this minuscule scale. Nanotechnology deals with objects measured in nanometers; a nanometer is a billionth of a meter, a millionth of a millimeter, or about 1/80,000 of the width of a human hair.