SEMINAR TOPICS
This blog is especially for computer science & engineering students.
Sunday, July 10, 2011
Autonomic Computing
“Autonomic Computing” is a new vision of computing initiated by IBM. This new paradigm shifts the fundamental definition of the technology age from one of computing to one defined by data. Access to data from multiple distributed sources, in addition to traditional centralized storage devices, will allow users to transparently access information when and where they need it. At the same time, this new view of computing will require the industry to shift its focus from processing speed and storage to developing distributed networks that are largely self-managing, self-diagnostic, and transparent to the user.
The term autonomic is derived from human biology. The autonomic nervous system monitors our heartbeat, checks our blood sugar level and keeps our body temperature close to 98.6 °F, without any conscious effort on our part. In much the same way, autonomic computing components anticipate computer system needs and resolve problems with minimal human intervention. However, there is an important distinction between autonomic activity in the human body and autonomic responses in computer systems. Many of the decisions made by autonomic elements in the body are involuntary, whereas autonomic elements in computer systems make decisions based on tasks you choose to delegate to the technology. In other words, adaptable policy, rather than rigid hard coding, determines the types of decisions and actions that autonomic elements make in computer systems.
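The control loop at the heart of this idea is usually described as monitor, analyze, plan, execute. The following is a minimal Python sketch of such a policy-driven loop; the metric, the policy values and the scaling action are invented for illustration and do not represent IBM's actual design.

```python
# Minimal sketch of a policy-driven autonomic control loop (illustrative
# names; not IBM's actual API). A monitor reads a metric, the analyzer
# compares it against a delegated policy, and the planner/executor acts
# without human intervention.
import random
import time

POLICY = {"max_cpu_load": 0.80, "scale_step": 1}  # delegated policy, not hard-coded logic

def monitor() -> float:
    """Stand-in sensor: a real system would query the managed resource."""
    return random.random()

def analyze(load: float) -> bool:
    """Decide whether the observed state violates the delegated policy."""
    return load > POLICY["max_cpu_load"]

def plan_and_execute(load: float, replicas: int) -> int:
    """React to the violation, here by scaling out a hypothetical service."""
    print(f"load={load:.2f} exceeds policy; scaling out by {POLICY['scale_step']}")
    return replicas + POLICY["scale_step"]

replicas = 1
for _ in range(5):  # the loop would normally run forever
    load = monitor()
    if analyze(load):
        replicas = plan_and_execute(load, replicas)
    time.sleep(0.1)
```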
Ref: http://seminars4you.info/ComputerScience.html
Brain Fingerprinting
Brain fingerprinting is based on the finding that the brain generates a unique brain-wave pattern when a person encounters a familiar stimulus. The use of functional magnetic resonance imaging in lie detection derives from studies suggesting that people asked to lie show different patterns of brain activity than they do when being truthful. Issues related to the use of such evidence in courts are discussed. The author concludes that neither approach is currently supported by enough data regarding its accuracy in detecting deception to warrant use in court.
In the field of criminology, a new lie detector has been developed in the United States of America, called “brain fingerprinting”. This invention is claimed to be the best lie detector available to date and is said to detect even smooth criminals who pass the polygraph test (the conventional lie-detector test) with ease. The new method employs brain waves, which are useful in detecting whether the person subjected to the test remembers finer details of the crime. Even if the person willingly suppresses the necessary information, the brain wave is sure to trap him, according to the experts, who are very excited about the new kid on the block.
Brain fingerprinting is designed to determine whether an individual recognizes specific information related to an event or activity by measuring electrical brain-wave responses to words, phrases, or pictures presented on a computer screen. The technique can be applied only in situations where investigators have a sufficient amount of specific information about an event or activity that would be known only to the perpetrator and investigator. In this respect, brain fingerprinting is considered a type of Guilty Knowledge Test, where the "guilty" party is expected to react strongly to relevant details of the event or activity.
Existing (polygraph) procedures for assessing the validity of a suspect's "guilty" knowledge rely on measurement of autonomic arousal (e.g., palm sweating and heart rate), while brain fingerprinting measures electrical brain activity via a fitted headband containing special sensors. Brain fingerprinting is said to detect "guilty" knowledge more accurately and to avoid the false positives of traditional polygraph methods, but this claim is hotly disputed by specialized researchers.
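The decision logic behind such a Guilty Knowledge Test can be pictured with a small sketch: average the brain-wave response to crime-relevant "probe" stimuli and to "irrelevant" stimuli, and flag recognition when the probes evoke a markedly larger response. The amplitudes, threshold and decision rule below are made up for illustration; real systems use far more sophisticated signal analysis of components such as the P300.

```python
# Illustrative sketch of the Guilty Knowledge Test logic behind brain
# fingerprinting: compare averaged response amplitudes for crime-relevant
# "probe" stimuli against "irrelevant" stimuli. All numbers are invented.
from statistics import mean

probe_amplitudes      = [9.1, 8.7, 9.4, 8.9]   # responses to crime details (microvolts)
irrelevant_amplitudes = [3.2, 2.9, 3.5, 3.1]   # responses to unrelated details

def recognizes_details(probes, irrelevants, ratio_threshold=2.0) -> bool:
    """Crude decision rule: probe response must dwarf the irrelevant baseline."""
    return mean(probes) > ratio_threshold * mean(irrelevants)

print("information present"
      if recognizes_details(probe_amplitudes, irrelevant_amplitudes)
      else "information absent")
```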
Ref: http://seminars4you.info/ComputerScience.html
MICROSOFT PALLADIUM
The Next-Generation Secure Computing Base (NGSCB), formerly known as Palladium, is a software architecture designed by Microsoft that is expected to implement the "Trusted Computing" concept in future versions of the Microsoft Windows operating system. Palladium is part of Microsoft's Trustworthy Computing initiative. Microsoft's stated aim for Palladium is to increase the security and privacy of computer users. Palladium involves a new breed of hardware and applications along with changes to the architecture of the Windows operating system. Designed to work side by side with the existing functionality of Windows, this significant evolution of the personal computer platform will introduce a level of security that meets the rising customer requirements for data protection, integrity and distributed collaboration. It is designed to give people greater security, personal privacy and system integrity. Palladium also provides Internet security, such as protecting data from viruses and hacking.
In addition to new core components in Windows that will move the Palladium effort forward, Microsoft is working with hardware partners to build Palladium components and features into their products. The new hardware architecture involves some changes to CPUs which are significant from a functional perspective. There will also be a new piece of hardware called for by Palladium that might be referred to as a security chip; it will provide a set of cryptographic functions and keys that are central to the design. There are also associated changes to the chipset and to the graphics and I/O system through the USB port, all designed to create a comprehensive security environment.
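One way to picture what such a security chip's cryptographic functions enable is attestation: the chip signs a measurement of the running software so a remote party can verify the platform's state. The Python sketch below is purely conceptual; real NGSCB hardware uses asymmetric keys, and the key, function names and HMAC stand-in here are assumptions chosen to keep the example in the standard library.

```python
# Conceptual sketch of the attestation idea behind Palladium's security
# chip: a device-held key signs a digest ("measurement") of the running
# software. Real hardware uses asymmetric keys; HMAC is a stand-in here,
# and DEVICE_KEY is hypothetical.
import hashlib
import hmac

DEVICE_KEY = b"secret-burned-into-the-chip"   # hypothetical; never leaves hardware

def measure(software_image: bytes) -> bytes:
    """Hash the software stack, as a secure boot path would."""
    return hashlib.sha256(software_image).digest()

def attest(software_image: bytes) -> bytes:
    """The chip signs the measurement; only it knows DEVICE_KEY."""
    return hmac.new(DEVICE_KEY, measure(software_image), hashlib.sha256).digest()

def verify(software_image: bytes, quote: bytes) -> bool:
    """A remote party recomputes the expected quote for a trusted image."""
    return hmac.compare_digest(attest(software_image), quote)

quote = attest(b"trusted-os-build-1.0")
print(verify(b"trusted-os-build-1.0", quote))   # True: platform state as expected
print(verify(b"tampered-os", quote))            # False: measurement mismatch
```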
"Palladium" is the code name for an evolutionary set of features for the Microsoft Windows operating system. When combined with a new breed of hardware and applications, "Palladium" gives individuals and groups of users greater data security, personal privacy and system integrity. Designed to work side-by-side with the existing functionality of Windows, this significant evolution of the personal computer platform will introduce a level of security that meets the rising customer requirements for data protection, integrity and distributed collaboration .
ASYMMETRIC DIGITAL SUBSCRIBER LINE (ADSL)
Digital Subscriber Lines (DSL) are used to deliver high-rate digital data over existing ordinary phone lines. A new modulation technology called Discrete Multitone (DMT) allows the transmission of high-speed data. DSL facilitates the simultaneous use of normal telephone services, ISDN, and high-speed data transmission, e.g., video. DMT-based DSL can be seen as the transition from existing copper lines to future fiber cables. This makes DSL economically interesting for the local telephone companies, which can offer customers high-speed data services even before switching to fiber optics.
DSL is a newly standardized transmission technology facilitating simultaneous use of normal telephone services, data transmission of up to 6 Mbit/s downstream, and Basic-rate Access (BRA). ADSL can be seen as an FDM system in which the available bandwidth of a single copper loop is divided into three parts. The base band occupied by POTS is split from the data channels by a method that guarantees POTS service even in the case of ADSL system failure (e.g., passive filters).
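This band split can be pictured with a small sketch mapping DMT tones (spaced 4.3125 kHz apart in ADSL) to the three parts of the spectrum. The exact tone boundaries below follow the commonly cited ANSI T1.413 plan and should be treated as illustrative.

```python
# Sketch of the FDM band plan described above: ADSL's DMT modulation
# divides the copper loop into 4.3125 kHz tones, reserving the base band
# for POTS and splitting the rest between upstream and downstream data.
# Tone ranges are illustrative, after the commonly cited ANSI T1.413 plan.
TONE_SPACING_HZ = 4312.5

def band_of(tone: int) -> str:
    if tone == 0:
        return "POTS (below ~4 kHz, split off by a passive filter)"
    if 1 <= tone <= 5:
        return "guard band"
    if 6 <= tone <= 31:
        return "upstream data"
    if 32 <= tone <= 255:
        return "downstream data"
    raise ValueError("ADSL DMT uses tones 0-255")

for tone in (0, 10, 64, 255):
    print(f"tone {tone:3d} (~{tone * TONE_SPACING_HZ / 1000:7.1f} kHz): {band_of(tone)}")
```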
Thursday, April 28, 2011
4G Wireless Technology
As the virtual centre of excellence in mobile and personal communications (Mobile VCE) moves into its second core research programme it has been decided to set up a fourth generation (4G) visions group aimed at harmonising the research work across the work areas and amongst the numerous researchers working on the programme. This paper outlines the initial work of the group and provides a start to what will become an evolving vision of 4G. A short history of previous generations of mobile communications systems and a discussion of the limitations of third generation (3G) systems are followed by a vision of 4G for 2010 based on five elements: fully converged services, ubiquitous mobile access, diverse user devices, autonomous networks and software dependency. This vision is developed in more detail from a technology viewpoint into the key areas of networks and services, software systems and wireless access.
The major driver of change in the mobile area in the last ten years has been the massive enabling implications of digital technology, both in digital signal processing and in service provision. The equivalent driver now, and in the next five years, will be the all-pervasiveness of software in both networks and terminals. The digital revolution is well underway and we stand at the doorway to the software revolution. Accompanying these changes are societal developments involving the extension in the use of mobiles. Starting out from speech-dominated services we are now experiencing massive growth in applications involving SMS (Short Message Service) together with the start of Internet applications using WAP (Wireless Application Protocol) and i-mode. The mobile phone has not only followed the watch, the calculator and the organiser as an essential personal accessory but has subsumed all of them. With the new Internet extensions it will also lead to a convergence of the PC, hi-fi and television and provide mobility to facilities previously only available on one network.
Fiber Distributed Data Interface (FDDI)
The Fiber Distributed Data Interface (FDDI) specifies a 100-Mbps token-passing, dual-ring LAN using fiber-optic cable. FDDI is frequently used as a high-speed backbone technology because of its support for high bandwidth and for greater distances than copper allows. Relatively recently, a related copper specification, called Copper Distributed Data Interface (CDDI), has emerged to provide 100-Mbps service over copper. CDDI is the implementation of FDDI protocols over twisted-pair copper wire. This article focuses mainly on FDDI specifications and operations, but it also provides a high-level overview of CDDI. FDDI uses a dual-ring architecture with traffic on each ring flowing in opposite directions (called counter-rotating). The dual ring consists of a primary and a secondary ring. During normal operation, the primary ring is used for data transmission, and the secondary ring remains idle. The primary purpose of the dual rings, as illustrated below, is to provide superior reliability and robustness.
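The "wrap" behaviour that gives the dual ring its robustness can be sketched in a few lines: when a link fails, the stations adjacent to the break splice the counter-rotating secondary ring in, so a single closed ring survives. The station names and failure model below are invented for illustration.

```python
# Sketch of FDDI dual-ring "wrap" recovery. Traffic normally flows on the
# primary ring only; when a link fails, the two stations adjacent to the
# break splice in the counter-rotating secondary ring, so one closed ring
# survives. Station names are invented.
stations = ["A", "B", "C", "D"]
n = len(stations)

primary   = {(stations[i], stations[(i + 1) % n]) for i in range(n)}
secondary = {(b, a) for (a, b) in primary}        # counter-rotating twin

def active_links(failed=None):
    """Directed links carrying data, before and after a wrap."""
    if failed is None:
        return primary                  # normal operation: secondary stays idle
    # Wrap: drop the failed hop and its severed secondary twin; every
    # remaining hop on both rings joins a single surviving ring.
    return (primary - {failed}) | (secondary - {(failed[1], failed[0])})

print(sorted(active_links()))                     # healthy: 4 primary hops
print(sorted(active_links(failed=("B", "C"))))    # wrapped: 6 hops, all stations reachable
```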
FDDI was developed by the American National Standards Institute (ANSI) X3T9.5 standards committee in the mid-1980s. At the time, high-speed engineering workstations were beginning to tax the bandwidth of existing local area networks (LANs) based on Ethernet and Token Ring. A new LAN medium was needed that could easily support these workstations and their new distributed applications. At the same time, network reliability had become an increasingly important issue as system managers migrated mission-critical applications from large computers to networks. FDDI was developed to fill these needs. After completing the FDDI specification, ANSI submitted FDDI to the International Organization for Standardization (ISO), which created an international version of FDDI that is completely compatible with the ANSI standard version.
HiperLAN
Recently, demand for high-speed Internet access has been rapidly increasing, and many people enjoy broadband wired Internet access services using ADSL (Asymmetric Digital Subscriber Line) or cable modems at home. On the other hand, the cellular phone is getting very popular, and users enjoy its location-free and wire-free services. The cellular phone also enables people to connect their laptop computers to the Internet in a location-free and wire-free manner. However, present cellular systems like GSM (Global System for Mobile communications) provide much lower data rates than the wired access systems, which offer over a few Mbps (megabits per second). Even in the next-generation cellular system, UMTS (Universal Mobile Telecommunications System), the maximum data rate of the initial service is limited to 384 kbps; therefore even UMTS cannot satisfy users' expectation of high-speed wireless Internet access. Hence the Mobile Broadband System (MBS) is becoming popular and important, and wireless LAN (Local Area Network) technology such as the ETSI (European Telecommunications Standards Institute) standard HIPERLAN (HIgh PErformance Radio Local Area Network) type 2 (denoted H/2) is regarded as a key to providing high-speed wireless access in MBS.
H/2 aims at providing high-speed multimedia services, security of services, and handover when roaming between local and wide areas as well as between corporate and public networks. It also aims at increased throughput for datacom as well as video-streaming applications. It operates in the 5 GHz band with 100 MHz of spectrum. H/2 is W-ATM based and is designed to extend the services of fixed ATM networks to mobile users. It is connection oriented, with connection durations of 2 ms or multiples thereof, and connections over the air are time-division multiplexed. H/2 allows interconnection into virtually any type of fixed network technology and can carry Ethernet frames, ATM cells and IP packets. It uses dynamic frequency allocation and offers bit rates of up to 54 Mbps.
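The connection-oriented, time-division-multiplexed air interface can be pictured with a small scheduling sketch: an access point carves each 2 ms frame into transmission slots and grants them per connection. The slot count, connection names and first-come-first-served policy below are invented for illustration; the real HIPERLAN/2 MAC frame structure is considerably more elaborate.

```python
# Illustrative sketch of a connection-oriented, time-division-multiplexed
# MAC in the spirit of HIPERLAN/2: the access point divides each 2 ms
# frame into slots and grants them per connection. All numbers and names
# are invented for the example.
FRAME_MS = 2.0
SLOTS_PER_FRAME = 20                      # illustrative granularity

# (connection id, requested slots) -- e.g. a video stream vs. bursty data
requests = [("video-conn-1", 12), ("data-conn-2", 6), ("voice-conn-3", 4)]

def schedule(requests, slots=SLOTS_PER_FRAME):
    """Grant slots first-come-first-served until the frame is full."""
    grants, free = {}, slots
    for conn, wanted in requests:
        grants[conn] = min(wanted, free)
        free -= grants[conn]
    return grants

for conn, got in schedule(requests).items():
    print(f"{conn}: {got} slots ({got / SLOTS_PER_FRAME * FRAME_MS:.2f} ms of the frame)")
```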