Thursday, April 28, 2011

4G Wireless Technology

As the virtual centre of excellence in mobile and personal communications (Mobile VCE) moves into its second core research programme it has been decided to set up a fourth generation (4G) visions group aimed at harmonising the research work across the work areas and amongst the numerous researchers working on the programme. This paper outlines the initial work of the group and provides a start to what will become an evolving vision of 4G. A short history of previous generations of mobile communications systems and a discussion of the limitations of third generation (3G) systems are followed by a vision of 4G for 2010 based on five elements: fully converged services, ubiquitous mobile access, diverse user devices, autonomous networks and software dependency. This vision is developed in more detail from a technology viewpoint into the key areas of networks and services, software systems and wireless access.

The major driver of change in the mobile area in the last ten years has been the massive enabling implications of digital technology, both in digital signal processing and in service provision. The equivalent driver now, and in the next five years, will be the all-pervasiveness of software in both networks and terminals. The digital revolution is well underway and we stand at the doorway to the software revolution. Accompanying these changes are societal developments involving extensions in the use of mobiles. Starting out from speech-dominated services we are now experiencing massive growth in applications involving SMS (Short Message Service) together with the start of Internet applications using WAP (Wireless Application Protocol) and i-mode. The mobile phone has not only followed the watch, the calculator and the organiser as an essential personal accessory but has subsumed all of them. With the new Internet extensions it will also lead to a convergence of the PC, hi-fi and television and provide mobility to facilities previously only available on one network.

Fiber Distributed Data Interface (FDDI)

The Fiber Distributed Data Interface (FDDI) specifies a 100-Mbps token-passing, dual-ring LAN using fiber-optic cable. FDDI is frequently used as a high-speed backbone technology because of its support for high bandwidth and for distances greater than copper allows. It should be noted that, relatively recently, a related copper specification, called Copper Distributed Data Interface (CDDI), has emerged to provide 100-Mbps service over copper. CDDI is the implementation of FDDI protocols over twisted-pair copper wire. This chapter focuses mainly on FDDI specifications and operations, but it also provides a high-level overview of CDDI. FDDI uses a dual-ring architecture with traffic on each ring flowing in opposite directions (called counter-rotating). The dual ring consists of a primary and a secondary ring. During normal operation, the primary ring is used for data transmission, and the secondary ring remains idle. The primary purpose of the dual rings, as will be discussed in detail later in this chapter, is to provide superior reliability and robustness.
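The wrap behaviour described above can be sketched in a few lines. This is a hypothetical model for illustration, not the ANSI X3T9.5 state machine: the primary ring carries the token while the counter-rotating secondary ring stands idle, and a link fault makes the stations on either side of the break wrap the two rings into one logical ring, keeping every station reachable.

```python
def primary_path(stations):
    """Normal operation: the token circulates the primary ring only."""
    return stations + [stations[0]]

def wrapped_path(stations, fault_index):
    """Token path after a fault between stations[fault_index] and the next.

    Traffic runs forward on the primary ring up to the fault, wraps onto
    the secondary ring (which flows the opposite way) to reach the
    stations beyond the break, then wraps back onto the primary ring to
    close the loop.
    """
    fwd = stations[:fault_index + 1]                 # primary segment, e.g. A->B
    sec = fwd[-2::-1] + stations[:fault_index:-1]    # secondary: B->A->D->C
    tail = stations[fault_index + 1:]                # primary resumes: C->D
    return fwd + sec + tail[1:] + [stations[0]]

ring = ["A", "B", "C", "D"]
print(primary_path(ring))        # ['A', 'B', 'C', 'D', 'A']
print(wrapped_path(ring, 1))     # fault between B and C:
                                 # ['A', 'B', 'A', 'D', 'C', 'D', 'A']
```

Note that in the wrapped path some stations are visited twice (once per ring), but connectivity to all four stations survives the fault.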

FDDI was developed by the American National Standards Institute (ANSI) X3T9.5 standards
committee in the mid-1980s. At the time, high-speed engineering workstations were beginning to tax the bandwidth of existing local area networks (LANs) based on Ethernet and Token Ring. A new LAN medium was needed that could easily support these workstations and their new distributed applications. At the same time, network reliability had become an increasingly important issue as system managers migrated mission-critical applications from large computers to networks. FDDI was developed to fill these needs. After completing the FDDI specification, ANSI submitted FDDI to the International Organization for Standardization (ISO), which created an international version of FDDI that is completely compatible with the ANSI standard version.

HiperLAN

Recently, demand for high-speed Internet access has been rapidly increasing, and many people enjoy broadband wired Internet access at home using ADSL (Asymmetric Digital Subscriber Line) or cable modems. On the other hand, the cellular phone is becoming very popular, and users enjoy its location-free and wire-free services. The cellular phone also enables people to connect their laptop computers to the Internet in a location-free and wire-free manner. However, present cellular systems like GSM (Global System for Mobile communications) provide much lower data rates than the wired access systems, which offer over a few Mbps (megabits per second). Even in the next-generation cellular system, UMTS (Universal Mobile Telecommunications System), the maximum data rate of the initial service is limited to 384 kbps; therefore even UMTS cannot satisfy users' expectation of high-speed wireless Internet access. Hence the Mobile Broadband System (MBS) is becoming popular and important, and wireless LANs (Local Area Networks) such as the ETSI (European Telecommunications Standards Institute) standard HIPERLAN (HIgh PErformance Radio Local Area Network) type 2 (denoted H/2) are regarded as a key to providing high-speed wireless access in MBS. H/2 aims at providing high-speed multimedia services, security of services, and handover when roaming between local and wide areas as well as between corporate and public networks. It also aims at providing increased throughput for datacom as well as video-streaming applications. It operates in the 5 GHz band with 100 MHz of spectrum. H/2 is W-ATM based and is designed to extend the services of fixed ATM networks to mobile users. It is connection oriented, with connection durations of 2 ms or multiples thereof, and connections over the air are time-division multiplexed. H/2 allows interconnection to virtually any type of fixed network technology and can carry Ethernet frames, ATM cells and IP packets. It follows dynamic frequency allocation and offers bit rates of up to 54 Mbps.
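The time-division multiplexing mentioned above can be illustrated with a toy scheduler. This is a hypothetical sketch, not the ETSI specification: the slot count per 2 ms MAC frame and the round-robin policy are assumptions made for illustration.

```python
FRAME_MS = 2          # H/2 MAC frame duration
SLOTS_PER_FRAME = 8   # assumed slot granularity (illustrative only)

def schedule_frame(requests):
    """Share one frame's slots among connections, round-robin.

    `requests` maps connection-id -> slots wanted this frame.
    Returns the connection-id granted each slot (None = idle slot).
    """
    grants = [None] * SLOTS_PER_FRAME
    pending = dict(requests)
    slot = 0
    while slot < SLOTS_PER_FRAME and any(v > 0 for v in pending.values()):
        for conn in list(pending):
            if slot >= SLOTS_PER_FRAME:
                break
            if pending[conn] > 0:
                grants[slot] = conn     # grant this slot to the connection
                pending[conn] -= 1
                slot += 1
    return grants

print(schedule_frame({"voice": 2, "video": 5}))
# ['voice', 'video', 'voice', 'video', 'video', 'video', 'video', None]
```

A real H/2 access point grants slots centrally per frame in much this spirit, though with far richer signalling and QoS rules.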

Hyper Threading

  Intel’s Hyper-Threading Technology brings the concept of simultaneous multi-threading to the Intel Architecture. Hyper-Threading Technology makes a single physical processor appear as two logical processors; the physical execution resources are shared and the architecture state is duplicated for the two logical processors. From a software or architecture perspective, this means operating systems and user programs can schedule processes or threads to logical processors as they would on multiple physical processors. From a microarchitecture perspective, this means that instructions from both logical processors will persist and execute simultaneously on shared execution resources.

The first implementation of Hyper-Threading Technology was done on the Intel Xeon processor MP. In this implementation there are two logical processors on each physical processor. The logical processors have their own independent architecture state, but they share nearly all the physical execution and hardware resources of the processor. The goal was to implement the technology at minimum cost while ensuring forward progress on each logical processor even when the other is stalled, and to deliver full performance even when there is only one active logical processor.
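A rough software analogy (not the Xeon microarchitecture) of "forward progress even when the other is stalled": just as Hyper-Threading lets one logical processor keep executing while its sibling waits on a long-latency event, an OS lets a ready thread run while another thread is blocked. Here the "stalled" thread sleeps while the worker thread keeps computing, so total wall-clock time is roughly that of the longer task alone.

```python
import threading, time

results = {}

def stalled():
    time.sleep(0.2)               # simulates a long stall (e.g. a cache miss)
    results["stalled"] = "done"

def worker():
    total = sum(range(100_000))   # useful work proceeds during the stall
    results["worker"] = total

t1 = threading.Thread(target=stalled)
t2 = threading.Thread(target=worker)
start = time.time()
t1.start(); t2.start()
t1.join(); t2.join()
elapsed = time.time() - start

print(results["worker"])          # 4999950000
print(elapsed < 0.4)              # True: the two activities overlapped
```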

The potential for Hyper-Threading Technology is tremendous; our current implementation has only just begun to tap into this potential. Hyper-Threading Technology is expected to be viable from mobile processors to servers; its introduction into market segments other than servers is only gated by the availability and prevalence of threaded applications and workloads in those markets.

Ovonic Unified Memory

Ovonic unified memory (OUM) is an advanced memory technology that uses a chalcogenide alloy (GeSbTe). The alloy has two states: a high-resistance amorphous state and a low-resistance polycrystalline state. These states are used to represent the reset and set states, respectively. The performance and attributes of the memory make it an attractive alternative to flash memory and potentially competitive with existing nonvolatile memory technology.
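The two-state encoding described above can be sketched as a simple read function. The threshold value here is a made-up figure for illustration, not a device parameter: a cell reading above the threshold is taken to be amorphous (reset, 0) and below it polycrystalline (set, 1).

```python
THRESHOLD_OHMS = 50_000   # hypothetical boundary between the two states

def read_cell(resistance_ohms):
    """High resistance (amorphous) -> reset/0; low (crystalline) -> set/1."""
    return 0 if resistance_ohms > THRESHOLD_OHMS else 1

print(read_cell(200_000))  # amorphous cell  -> 0 (reset)
print(read_cell(5_000))    # crystalline cell -> 1 (set)
```

Writing works the other way round: a short, high-amplitude current pulse melts and quenches the alloy into the amorphous (reset) state, while a longer, lower pulse anneals it back to the crystalline (set) state.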

Almost 25% of the worldwide chip market consists of memory devices, each type used for its specific advantages: the high speed of SRAM, the high integration density of DRAM, or the nonvolatile capability of a FLASH memory device. The industry is searching for a holy grail of future memory technologies to serve the upcoming market of portable and wireless devices. These applications can already be built on existing memory technology, but for successful market penetration, higher performance at a lower price is required.

The existing technologies are characterized by the following limitations: DRAMs are difficult to integrate; SRAMs are expensive; FLASH memory supports only a limited number of write/erase cycles; EPROMs have high power requirements and poor flexibility. There is a growing need for a nonvolatile memory technology for high-density stand-alone and embedded CMOS applications with faster write speed and higher endurance than existing nonvolatile memories. OUM is a promising technology to meet this need. R. G. Neale, D. L. Nelson, and Gordon E. Moore originally reported a phase-change memory array based on chalcogenide materials in 1970. Improvements in phase-change materials technology subsequently paved the way for development of commercially available rewriteable CD and DVD optical memory disks. These advances, coupled with significant technology scaling and a better understanding of the fundamental electrical device operation, have motivated development of OUM technology at the present-day technology node.

Service Oriented Architecture (SOA)

SOA is a design for linking business and computational resources (principally organizations, applications and data) on demand to achieve the desired results for service consumers (which can be end users or other services). Service-orientation describes an architecture that uses loosely coupled services to support the requirements of business processes and users. Resources on a network in a SOA environment are made available as independent services that can be accessed without knowledge of their underlying platform implementation. These concepts can be applied to business, software and other types of producer/consumer systems.
The main drivers for SOA adoption are that it links computational resources and promotes their reuse. Enterprise architects believe that SOA can help businesses respond more quickly and cost-effectively to changing market conditions. This style of architecture promotes reuse at the macro (service) level rather than the micro (object) level.
The following guiding principles define the ground rules for development, maintenance, and usage of SOA:

  • Reuse, granularity, modularity, composability, componentization, and interoperability
  • Compliance to standards (both common and industry-specific)
  • Services identification and categorization, provisioning and delivery, and monitoring and tracking.
One obvious and common challenge is managing service metadata. Another is providing appropriate levels of security. Interoperability is a further important aspect of SOA implementations.

SOA implementations rely on a mesh of software services (a mesh, like a web or net, consists of many connected or woven strands). Services comprise unassociated, loosely coupled units of functionality with no calls to each other embedded in them. Each service implements one action, such as filling out an online application for an account, viewing an online bank statement, or placing an online booking or airline ticket order. Instead of embedding calls to each other in their source code, services use defined protocols that describe how they pass and parse messages, using description metadata.
SOA developers associate individual SOA objects by using orchestration (the automated arrangement, coordination, and management of complex computer systems, middleware, and services). In the process of orchestration, the developer associates software functionality (the services) in a non-hierarchical arrangement (in contrast to a class hierarchy) using a software tool that contains a complete list of all available services, their characteristics, and the means to build an application utilizing these services.
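The orchestration idea above can be sketched minimally. The services and registry here are hypothetical examples, not a specific SOA product: each service is an independent unit that only receives and returns messages, and an orchestrator holding the list of available services wires them into a business process without the services ever calling each other.

```python
def validate_order(message):
    """Independent service: adds a validity flag to the message."""
    return {**message, "valid": message.get("quantity", 0) > 0}

def price_order(message):
    """Independent service: computes the order total."""
    return {**message, "total": message["quantity"] * message["unit_price"]}

# The orchestrator's "complete list of all available services".
REGISTRY = {"validate": validate_order, "price": price_order}

def orchestrate(steps, message):
    """Run the named services in order, passing the message along."""
    for step in steps:
        message = REGISTRY[step](message)
    return message

result = orchestrate(["validate", "price"],
                     {"quantity": 3, "unit_price": 10})
print(result["valid"], result["total"])   # True 30
```

Because the services share only a message format, either one can be replaced or redeployed without touching the other; only the orchestrator knows the overall flow.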

The Socket Interface

We must have an interface between application programs and the protocol software in order to use network facilities. My seminar is on a model of an interface between application programs and the TCP/IP protocols. The TCP/IP protocol standards do not specify exactly how application programs interact with the protocol software; thus the interface architecture is not standardized, and its design lies outside the scope of the protocol suite. It should further be noted that it is inappropriate to tie the protocols to a particular interface, because no single interface architecture works well on all systems. In particular, because protocol software resides in a computer's operating system, interface details depend on the operating system.

In spite of this lack of standards, a programmer must know about such interfaces to be able to use TCP/IP. Although I have chosen the UNIX operating system to explain the model, its socket interface has been widely accepted and is used in many systems. One more thing: the operations I will list here are not a standard in any sense.
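As a small taste of the socket interface, here is a sketch using Python's socket module, which mirrors the UNIX/BSD socket calls (socket, bind, listen, accept, connect, send, recv). A server and client talk over TCP on the loopback interface; port 0 asks the OS for any free port, so the example is self-contained.

```python
import socket, threading

def server(ready, port_box):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    port_box.append(srv.getsockname()[1])
    ready.set()                         # tell the client which port to use
    conn, _ = srv.accept()              # blocks until the client connects
    data = conn.recv(1024)
    conn.sendall(b"echo: " + data)
    conn.close(); srv.close()

ready, port_box = threading.Event(), []
threading.Thread(target=server, args=(ready, port_box)).start()
ready.wait()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port_box[0]))
cli.sendall(b"hello")
reply = cli.recv(1024).decode()
print(reply)                            # echo: hello
cli.close()
```

The same sequence of calls, with C syntax, is what a UNIX application would issue directly against the operating system's socket interface.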