SMU MBA ASSIGNMENTS

Sikkim Manipal Solved MBA Assignments, SMU MBA, Solved Assignments, 1st sem, 2nd sem, 3rd sem, 4th sem, SMU MBA Projects


Seminar



RAID
Real Time Systems with Linux/RTAI
Real-Time Obstacle Avoidance
Symbian OS
Teleportation
Turbo Codes
Utility Fog
Cluster Computing
Tamper Resistance
CGI Programming
DRM Software Radio
Firewalls
Handheld Computers
Modems and ISDN
Optical Free Space Communication
Planar Separators
Wireless Internet
DVD Technology
MPEG-4 Facial Animation
Internet Telephony Policy in India
PON Topologies
Smart Cards
Surround Systems
TCPA / Palladium
Trustworth
Sense-Response Applications
Optical Computer
Graph Separators
Extensible Markup Language
Cellular Communications
Virtual Instrumentation
Voice Quality
Cable Modems
Dynamic Memory Allocation
Optical Mouse
Generic Framing Procedure
Dynamic Cache Management Technique
Ethernet Passive Optical Network
Synchronous Optical Networking
Development of the Internet
Design and Analysis of Algorithms
Infinite Dimensional Vector Space
Instant Messaging
Optical Fibre Cable
GSM
Ambiophonics
Palladium
Computer Viruses
Third Generation
Object Oriented Concepts
Night Vision Technology
TCP/IP
Dynamic Synchronous Transfer Mode
Integrated Voice and Data


New Seminar List 2
Fast Convergence Algorithms for Active Noise Control in Vehicles
3G vs WiFi
Data Security in Local Network using Distributed Firewalls
Computerized Paper Evaluation using Neural Network
Bluetooth Based Smart Sensor Networks
Laser Communications
Multiple Domain Orientation
M-Voting
Industrial Applications using Neural Networks
Implementation Of Zoom FFT
Image Processing
Virtual keyboard
Transient Stability Assessment using Neural Networks
Tracking and Positioning of Mobiles in Telecommunication
Timing Attacks on Implementations
Prototype System Design for Telemedicine
Optical Networking and Dense Wavelength Division Multiplexing
Optical Burst Switching
Image Processing & Compression
Facility Layout Design using Genetic Algorithm
Design of 2-D Filters using a Parallel Processor Architecture
Hyper Transport Technology
CorDECT
Storage Area Network
High Performance DSP Architectures
Smart Dust
MANET
Biometrics Based Authentication
Software-Defined Radio
Digital Video Editing
D-Blast
Aeronautical Communication
Gigabit Ethernet
Wisenet
Digital Light Processing
Free Space Laser Communications
Millipede
AC Performance Of Nanoelectronics
4G Wireless Systems
Blu-ray Disc
Smart Fabrics
Daknet
Conditional Access System
SyncML
Short Message Service (SMS)
Worldwide Interoperability for Microwave Access
64-Bit Computing
Bio-Molecular Computing
Code Division Duplexing
Delay Tolerant Networking
EDGE
Holographic Data Storage
Integer Fast Fourier Transform
NRAM
Orthogonal Frequency Division Multiplexing
Ovonic Unified Memory
Pervasive Computing

List of Seminars….
Java Servlets
Smart Quill
Spam Assassin
Stereoscopic Imaging
Zigbee
Ultra-Wideband
Home Networking
Internet Protocol Television
Bacterio-Rhodopsin Memory
Blade Servers
Digital Cinema
Face Recognition Technology
FireWire
Holographic Versatile Disc
Itanium Processor
Local Multipoint Distribution Service
Network Attached Storage
Night Vision Technology
Opera (web browser)
Parasitic Computing
Performance Testing
Java Ring
RDRAM
GSM Security And Encryption

Heaps
Wireless Application Protocol
Light Tree
Virtual Private Network
Clockless Chip
Layer 3 Switching
IMode
Blue Gene
Access gateways
Computer Forensics
Direct Memory Access
Crusoe
Digital Subscriber Line
Computer Memory Based on the Protein Bacterio-rhodopsin
DNA Based Computing
Free Space Optics
Freenet
Fiber Distributed Data Interface
Dynamic Virtual Private Network
Introduction to the Internet Protocols
Graphics Processing Unit
High Altitude Aeronautical Platforms
Aspect-Oriented Programming (AOP)
Intel MMX Technology
Hyper-Threading technology
IMAX
Brain-Computer Interface
InfiniBand
Multicast
Inverse Multiplexing
Bluetooth
Holographic Memory
Jini Technology
Bio-metrics
Magnetic Random Access Memory
Intrusion Detection System
Multiterabit Networks
Neural Networks And Their Applications
Quantum Computers
Small Computer System Interface
OpenRAN
Quadrics Interconnection Network
Plan 9 Operating System
Structured Cabling
Quantum Cryptography
Speech Application Language Tags
Real-Time Systems and Real-Time Operating Systems
Parallel Computing In India
Steganography
Virtual LAN Technology
Artificial Neural Network (ANN)
Tele-immersion
VHDL
Blue Eyes
Voice Over Internet Protocol
The Tiger SHARC processor
Autonomic Computing
Wearable Computers
Voice Portals




1)AKA

AKA stands for Authentication and Key Agreement, a security protocol that performs authentication and session key distribution in Universal Mobile Telecommunications System (UMTS) networks. AKA is a challenge-response mechanism that uses symmetric cryptography. It is typically run in a UMTS IM Services Identity Module (ISIM), which resides on a smart-card-like device that also provides tamper-resistant storage of shared secrets.
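Conceptually, the challenge-response flow can be sketched with ordinary symmetric primitives. The toy below uses HMAC-SHA256 and invented labels; real UMTS AKA uses the MILENAGE functions plus sequence numbers and an authentication token, none of which are modeled here.

```python
import hmac, hashlib, os

# Toy challenge-response sketch (illustrative only, NOT the real UMTS AKA,
# which uses the MILENAGE algorithm set and sequence numbers).
SHARED_SECRET = b"pre-shared 128-bit key.........."  # in UMTS, stored on the ISIM

def network_challenge():
    """Network side: generate a random challenge."""
    return os.urandom(16)

def isim_response(secret, challenge):
    """Card side: derive a response and a session key from the shared secret."""
    res = hmac.new(secret, b"RES" + challenge, hashlib.sha256).digest()
    ck = hmac.new(secret, b"CK" + challenge, hashlib.sha256).digest()
    return res, ck

def network_verify(secret, challenge, res):
    """Network side: recompute the expected response and compare."""
    expected, ck = isim_response(secret, challenge)
    return hmac.compare_digest(expected, res), ck

challenge = network_challenge()
res, session_key = isim_response(SHARED_SECRET, challenge)
ok, ck = network_verify(SHARED_SECRET, challenge, res)
assert ok and ck == session_key  # both sides now share a session key
```

The point of the sketch is the shape of the protocol: authentication and session-key derivation fall out of the same challenge-response exchange over a shared secret.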

Exponential key exchange

The first publicly known public-key agreement protocol that meets the above criteria was the Diffie-Hellman exponential key exchange, in which two parties jointly exponentiate a generator with random numbers, in such a way that an eavesdropper cannot feasibly determine the key. However, exponential key exchange in and of itself does not specify any prior agreement or subsequent authentication between the participants. It has thus been described as an anonymous key agreement protocol.
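A minimal sketch of the exchange, using a deliberately tiny prime so the numbers stay readable (real deployments use groups of 2048 bits or more):

```python
import secrets

# Toy Diffie-Hellman exponential key exchange with a tiny prime.
p, g = 23, 5                      # public parameters: prime modulus and generator

a = secrets.randbelow(p - 2) + 1  # Alice's private exponent
b = secrets.randbelow(p - 2) + 1  # Bob's private exponent

A = pow(g, a, p)                  # Alice sends g^a mod p
B = pow(g, b, p)                  # Bob sends g^b mod p

# Each side exponentiates the other's public value with its own secret:
# (g^b)^a = (g^a)^b = g^(ab) mod p.
k_alice = pow(B, a, p)
k_bob   = pow(A, b, p)
assert k_alice == k_bob           # shared key; eavesdropper sees only p, g, A, B
```

Note that nothing here identifies who sent A or B, which is exactly why the raw exchange is anonymous and needs the authentication layer discussed next.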

Authentication

Anonymous key exchange, like Diffie-Hellman, does not provide authentication of the parties, and is thus vulnerable to man-in-the-middle attacks. A wide variety of cryptographic authentication schemes and protocols have been developed to provide authenticated key agreement and prevent man-in-the-middle and related attacks. These methods generally mathematically bind the agreed key to other agreed-upon data, such as public/private key pairs, shared secret keys, and passwords.

******************************************************************************************************************************************

2)JIRO Technology

*****************************************************************************************************

3)Robocode

Robocode is an Open Source educational game by Mathew Nelson (originally provided by IBM). It is designed to help people learn to program in Java and enjoy the experience. It is very easy to start - a simple robot can be written in just a few minutes - but perfecting a bot can take months or more. Competitors write software that controls a miniature tank that fights other identically-built (but differently programmed) tanks in a playing field. Robots move, shoot at each other, scan for each other, and hit the walls (or other robots) if they aren't careful. Though the idea of this game may seem simple, the actual strategy needed to win is not.

Good robots have hundreds of lines in their code dedicated to strategy. Some of the more successful robots use techniques such as statistical analysis and attempts at neural networks in their designs. One can test a robot against many other competitors by downloading their bytecode, so design competition is fierce. Robocode provides a security sandbox (bots are restricted in what they can do on the machine they run on) which makes this a safe thing to do.

*******************************************************************************************************

4)Signcryption

Signcryption is a new paradigm in public key cryptography that simultaneously fulfils both the functions of digital signature and public key encryption in a logically single step, and with a cost significantly lower than that required by the traditional signature-then-encryption approach. The main disadvantage of the traditional approach is that digitally signing a message and then encrypting it consumes more machine cycles and bloats the message by introducing extra bits; likewise, decrypting and verifying the message at the receiver's end uses up a lot of computational power.

Thus the cost of delivering a message using signing-then-encryption is in effect the sum of the costs of both digital signatures and public key encryption. Is it possible to send a message of arbitrary length at a cost lower than that required by signature-then-encryption? Signcryption answers this in the affirmative. This topic has mathematical content similar to the lecture on RSA, and requires a good understanding of encryption algorithms such as RSA and DES.

*****************************************************************************************************

5)Genetic programming

      Genetic programming (GP) is an automated methodology inspired by biological evolution to find computer programs that best perform a user-defined task. It is therefore a particular machine learning technique that uses an evolutionary algorithm to optimize a population of computer programs according to a fitness landscape determined by a program's ability to perform a given computational task. The first experiments with GP were reported by Stephen F. Smith (1980) and Nichael L. Cramer (1985), as described in the famous book Genetic Programming: On the Programming of Computers by Means of Natural Selection by John Koza (1992).

    Computer programs in GP can be written in a variety of programming languages. In the early (and traditional) implementations of GP, program instructions and data values were organized in tree-structures, thus favoring the use of languages that naturally embody such a structure (an important example pioneered by Koza is Lisp). Other forms of GP have been suggested and successfully implemented, such as the simpler linear representation which suits the more traditional imperative languages [see, for example, Banzhaf et al. (1998)]. The commercial GP software Discipulus, for example, uses linear genetic programming combined with machine code language to achieve better performance. Differently, the MicroGP uses an internal representation similar to linear genetic programming to generate programs that fully exploit the syntax of a given assembly language.
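A tree-structured program can be represented directly with nested tuples. The following minimal sketch (all parameters invented; mutation-only for brevity, with no crossover, which real GP systems rely on) evolves arithmetic expressions toward the target function x² + x:

```python
import random, operator

random.seed(0)  # deterministic run for illustration

# Minimal tree-based GP sketch: programs are nested tuples (op, left, right)
# or terminals ('x' or a constant). Target: f(x) = x*x + x.
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMINALS = ['x', 1.0]

def rand_tree(depth=3):
    """Grow a random expression tree up to the given depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    """Interpret an expression tree at point x."""
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """Total absolute error against the target over sample points (0 = perfect)."""
    return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

def mutate(tree):
    """Replace a random subtree with a freshly grown one."""
    if random.random() < 0.2 or not isinstance(tree, tuple):
        return rand_tree(2)
    op, l, r = tree
    return (op, mutate(l), r) if random.random() < 0.5 else (op, l, mutate(r))

pop = [rand_tree() for _ in range(200)]
for gen in range(30):
    pop.sort(key=fitness)                 # lower error first
    if fitness(pop[0]) == 0:
        break
    survivors = pop[:50]                  # truncation selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(pop, key=fitness)
```

This is the tree representation in miniature: the population, the fitness landscape, and the variation operator are all visible in a few dozen lines, which is why Lisp-like structures were such a natural fit for early GP.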

    GP is very computationally intensive and so in the 1990s it was mainly used to solve relatively simple problems. However, more recently, thanks to various improvements in GP technology and to the well known exponential growth in CPU power, GP has started delivering a number of outstanding results. At the time of writing, nearly 40 human-competitive results have been gathered, in areas such as quantum computing, electronic design, game playing, sorting, searching and many more. These results include the replication or infringement of several post-year-2000 inventions, and the production of two patentable new inventions.

    Developing a theory for GP has been very difficult and so in the 1990s genetic programming was considered a sort of pariah amongst the various techniques of search. However, after a series of breakthroughs in the early 2000s, the theory of GP has had a formidable and rapid development. So much so that it has been possible to build exact probabilistic models of GP (schema theories and Markov chain models) and to show that GP is more general than, and in fact includes, genetic algorithms.

   Genetic Programming techniques have now been applied to evolvable hardware as well as computer programs.

   Meta-Genetic Programming is the technique of evolving a genetic programming system using genetic programming itself. Critics have argued that it is theoretically impossible, but more research is needed.

**************************************************************************************************************

6)Gecko

Gecko is the open source, free software web browser layout engine used in all Mozilla-branded software and its derivatives, including later Netscape releases. Written in C++, Gecko is designed to support open Internet standards. Originally created by Netscape Communications Corporation, its development is now overseen by the Mozilla Foundation.

Gecko offers a rich programming API that makes it suitable for a wide variety of roles in Internet-enabled applications, such as web browsers, content presentation, and client/server applications. Primarily it is used by Mozilla browser derivatives, such as Mozilla Firefox, K-Meleon and Netscape, to render websites and the browser user interface (written in XUL), but it is used elsewhere as well. Gecko is cross-platform and works on a number of different operating systems, including Microsoft Windows, BSDs, GNU/Linux and Mac OS X.

Gecko is generally considered to be the second most-popular layout engine on the Web, after Trident (used by Internet Explorer for Windows since version 4), and followed by KHTML (used by Konqueror), WebCore (used by Safari), Presto (used by Opera) and Tasman (used by Internet Explorer for Mac).

******************************************************************************************************************

7)Symfony

Symfony is a web application framework for PHP5 projects.

It aims to speed up the creation and maintenance of web applications, and to replace the repetitive coding tasks by power, control and pleasure.

The very small number of prerequisites makes symfony easy to install on any configuration; you just need Unix or Windows with a web server and PHP 5 installed. It is compatible with almost every database system. In addition, it has very small overhead, so the benefits of the framework don't come at the cost of increased hosting costs.

Using symfony is so natural and easy for people used to PHP and the design patterns of Internet applications that the learning curve is reduced to less than a day. The clean design and code readability keep development cycles short. Developers can apply agile development principles (such as DRY, KISS or the XP philosophy) and focus on application logic without wasting time writing endless XML configuration files.

Symfony is aimed at building robust applications in an enterprise context. This means that you have full control over the configuration: from the directory structure to the foreign libraries, almost everything can be customized. To match your enterprise's development guidelines, symfony is bundled with additional tools that help you test, debug and document your project.

Last but not least, by choosing symfony you get the benefits of an active open-source community. It is entirely free and published under the MIT license.

Symfony is sponsored by Sensio, a French Web Agency.

************************************************************************************************************

8)CakePHP

CakePHP is a web application framework written in PHP, modeled after the concepts of Ruby on Rails.

Features:

    * Model, View, Controller architecture
    * View helpers for AJAX, JavaScript, HTML forms and more
    * Built-in validation
    * Application scaffolding
    * Application and CRUD code generation via Bake
    * Access Control Lists
    * Data sanitization
    * Security, session, and request handling components
    * Flexible view caching

Active, Friendly Community - Just join our IRC channel to see who's in. We'd love to help you get started.

   1. Flexible License - Cake is distributed under the MIT License
   2. Clean IP - Every line of code was written by the CakePHP development team
   3. Extremely Simple - Just look at the name... It's Cake
   4. Rapid Development - Build apps faster than ever before
   5. Best Practices - Cake is easy to understand and sets the industry standard in security, authentication, and session handling, among other features
   6. OO - Whether you are a seasoned object-oriented programmer or a beginner, you'll feel comfortable
   7. No Configuration - Set up the database and watch the magic begin

****************************************************************************************************************

9)OWL

OWL is an acronym for Web Ontology Language, a markup language for publishing and sharing data using ontologies on the Internet. OWL is a vocabulary extension of the Resource Description Framework (RDF) and is derived from the DAML+OIL Web Ontology Language (see also DAML and OIL). Together with RDF and other components, these tools make up the Semantic Web project.

OWL represents the meanings of terms in vocabularies and the relationships between those terms in a way that is suitable for processing by software.

The OWL specification is maintained by the World Wide Web Consortium (W3C).

OWL currently has three flavors: OWL Lite, OWL DL, and OWL Full. These flavors incorporate different features; in general, it is easier to reason about OWL Lite than OWL DL, and about OWL DL than OWL Full. OWL Lite and OWL DL are constructed in such a way that every statement can be decided in finite time; OWL Full can contain "endless loops".
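One flavor of the reasoning involved can be illustrated with a toy: computing the transitive closure of subClassOf over a small set of triples. The class names are invented, and real OWL reasoners handle far richer constructs (restrictions, equivalence, cardinality), but the "derive facts not explicitly stated" pattern is the same.

```python
# Toy sketch of one inference OWL/RDF reasoners perform: the transitive
# closure of rdfs:subClassOf over (subject, predicate, object) triples.
triples = {
    ("Dog", "subClassOf", "Mammal"),
    ("Mammal", "subClassOf", "Animal"),
    ("Cat", "subClassOf", "Mammal"),
}

def superclasses(cls, triples):
    """Return every class reachable from cls via subClassOf edges."""
    found, frontier = set(), {cls}
    while frontier:
        nxt = {o for s, p, o in triples if p == "subClassOf" and s in frontier}
        frontier = nxt - found
        found |= nxt
    return found

# "Dog subClassOf Animal" is nowhere in the data, yet it is entailed:
assert superclasses("Dog", triples) == {"Mammal", "Animal"}
```

Because this closure computation terminates on any finite triple set, it also hints at why OWL Lite and OWL DL restrict the language: they keep every entailment question decidable in finite time.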

**********************************************************************************************************************

10)SIP

Session Initiation Protocol
Session Initiation Protocol (SIP) is a protocol developed by the IETF MMUSIC Working Group, and a proposed standard for initiating, modifying, and terminating an interactive user session that involves multimedia elements such as video, voice, instant messaging, online games, and virtual reality.
SIP clients traditionally use TCP and UDP port 5060 to connect to SIP servers and other SIP endpoints. SIP is primarily used in setting up and tearing down voice or video calls. However, it can be used in any application where session initiation is a requirement; examples include event subscription and notification, terminal mobility, and so on. There are a large number of SIP-related RFCs that define behavior for such applications. All voice/video communications are done over RTP.
A motivating goal for SIP was to provide a signaling and call setup protocol for IP-based communications that can support a superset of the call processing functions and features present in the public switched telephone network (PSTN).
SIP enabled telephony networks can also implement many of the more advanced call processing features present in Signalling System 7 (SS7), though the two protocols themselves are very different. SS7 is a highly centralized protocol, characterized by highly complex central network architecture and dumb endpoints (traditional telephone handsets). SIP is a peer-to-peer protocol.
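SIP's text-based, HTTP-like shape can be shown with a short sketch that assembles a bare-bones INVITE. The addresses and Call-ID here are invented, and a real RFC 3261 request needs additional headers (such as Via with a branch parameter, Max-Forwards, and Contact) that are omitted for brevity.

```python
# Simplified sketch of a SIP INVITE request (illustrative, not RFC-complete).
def make_invite(caller, callee, call_id):
    """Assemble a minimal SIP INVITE as CRLF-delimited text."""
    lines = [
        f"INVITE sip:{callee} SIP/2.0",   # request line: method, Request-URI, version
        f"From: <sip:{caller}>",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Content-Length: 0",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"  # blank line ends the headers

msg = make_invite("alice@example.com", "bob@example.org", "a84b4c76e66710")
method, request_uri, version = msg.split("\r\n")[0].split(" ")
assert method == "INVITE" and request_uri == "sip:bob@example.org"
```

The readable, line-oriented format is one of the clearest contrasts with SS7's binary, centrally administered signaling.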

SIP network elements

Hardware endpoints, devices with the look, feel, and shape of a traditional telephone, but that use SIP and RTP for communication, are commercially available from several vendors. Some of these can use Electronic Numbering (ENUM) or DUNDi to translate existing phone numbers to SIP addresses using DNS, so calls to other SIP users can bypass the telephone network, even though your service provider might normally act as a gateway to the PSTN network for traditional phone numbers (and charge you for it).

SIP makes use of elements called proxy servers to help route requests to the user's current location, authenticate and authorize users for services, implement provider call-routing policies, and provide features to users.
SIP also provides a registration function that allows users to upload their current locations for use by proxy servers.
Since registrations play an important role in SIP, a User Agent Server that handles a REGISTER is given the special name registrar.
It is an important concept that the distinction between types of SIP servers is logical, not physical.

**************************************************************************************************************

11)SAFER

In cryptography, SAFER (Secure And Fast Encryption Routine) is the name of a family of block ciphers designed primarily by James Massey (one of the designers of IDEA) on behalf of Cylink Corporation. The early SAFER K and SAFER SK designs share the same encryption function but differ in the number of rounds and the key schedule. More recent versions, SAFER+ and SAFER++, were submitted as candidates to the AES process and the NESSIE project respectively. All of the algorithms in the SAFER family are unpatented and available for unrestricted use.

The first SAFER cipher was SAFER K-64, published by Massey in 1993, with a 64-bit block size. The K-64 denotes a key size of 64 bits. There was some demand for a version with a larger 128-bit key, and the following year Massey published such a variant incorporating a new key schedule designed by the Singapore Ministry of Home Affairs: SAFER K-128. However, both Lars Knudsen and Sean Murphy found minor weaknesses in this version, prompting a redesign of the key schedule to one suggested by Knudsen; these variants were named SAFER SK-64 and SAFER SK-128 respectively, the SK standing for "Strengthened Key schedule", though the RSA FAQ reports that "one joke has it that SK really stands for Stop Knudsen, a wise precaution in the design of any block cipher". Another variant with a reduced key size was published, SAFER SK-40, to comply with 40-bit export restrictions.

All of these ciphers use the same round function consisting of four stages, as shown in the diagram: a key-mixing stage, a substitution layer, another key-mixing stage, and finally a diffusion layer. In the first key-mixing stage, the plaintext block is divided into eight 8-bit segments, and subkeys are added using either addition modulo 256 (denoted by a + in a square) or XOR (denoted by a + in a circle). The substitution layer consists of two S-boxes, each the inverse of the other, derived from discrete exponentiation (45^x mod 257) and logarithm (log_45 x) functions. After a second key-mixing stage there is the diffusion layer: a novel cryptographic component termed a pseudo-Hadamard transform (PHT). (The PHT was also later used in the Twofish cipher.)
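Two of the components above are easy to reproduce directly. This sketch builds the exponentiation S-box from powers of 45 modulo 257 (with the value 256 stored as the byte 0, as in SAFER), derives its logarithm inverse, and implements the 2-point pseudo-Hadamard transform; it is not a full SAFER round, and the helper names are invented.

```python
# SAFER building blocks: exponentiation/logarithm S-boxes and the 2-point PHT.
# EXP[x] = 45^x mod 257, with 256 represented as 0 so values fit in a byte.
EXP = [pow(45, x, 257) % 256 for x in range(256)]
LOG = [0] * 256
for x, y in enumerate(EXP):
    LOG[y] = x                       # logarithm table is the inverse permutation

def pht(a, b):
    """2-point pseudo-Hadamard transform on a pair of bytes (diffusion layer)."""
    return (2 * a + b) % 256, (a + b) % 256

def pht_inv(u, v):
    """Inverse PHT, used when decrypting: a = u - v, b = 2v - u (mod 256)."""
    return (u - v) % 256, (2 * v - u) % 256

# The tables invert each other, and the PHT is invertible:
assert all(EXP[LOG[y]] == y for y in range(256))
assert pht_inv(*pht(10, 200)) == (10, 200)
```

Because 45 is a generator modulo the prime 257, the 256 powers hit every nonzero residue exactly once, so EXP is a permutation of the byte values and the logarithm table is well defined.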

*********************************************************************************************************************

12)Holographic Memory

Devices that use light to store and read data have been the backbone of data storage for nearly two decades. Compact discs revolutionized data storage in the early 1980s, allowing multi-megabytes of data to be stored on a disc that has a diameter of a mere 12 centimeters and a thickness of about 1.2 millimeters. In 1997, an improved version of the CD, called a digital versatile disc (DVD), was released, which enabled the storage of full-length movies on a single disc.

CDs and DVDs are the primary data storage methods for music, software, personal computing and video. A CD can hold 783 megabytes of data. A double-sided, double-layer DVD can hold 15.9 GB of data, which is about eight hours of movies. These conventional storage mediums meet today's storage needs, but storage technologies have to evolve to keep pace with increasing consumer demand. CDs, DVDs and magnetic storage all store bits of information on the surface of a recording medium. In order to increase storage capabilities, scientists are now working on a new optical storage method called holographic memory that will go beneath the surface and use the volume of the recording medium for storage, instead of only the surface area. Three-dimensional data storage will be able to store more information in a smaller space and offer faster data transfer times.

Holographic memory is a developing technology that promises to revolutionize storage systems. It can store up to 1 TB of data in a sugar-cube-sized crystal; data from more than 1,000 CDs can fit into a holographic memory system. Most of the computer hard drives available today can hold only 10 to 40 GB of data, a small fraction of what a holographic memory system can hold. Conventional memories use only the surface to store data, but holographic data storage systems use the volume, giving them more advantages than conventional storage systems. The technique is based on the principle of holography.

Scientist Pieter J. van Heerden first proposed the idea of holographic (three-dimensional) storage in the early 1960s. A decade later, scientists at RCA Laboratories demonstrated the technology by recording 500 holograms in an iron-doped lithium-niobate crystal and 550 holograms of high-resolution images in a light-sensitive polymer material. The lack of cheap parts and the advancement of magnetic and semiconductor memories placed the development of holographic data storage on hold.

****************************************************************************************************************

13)Clockless Chips

Clock speeds are now in the gigahertz range and there is not much room for speedup before physical realities start to complicate things. With a gigahertz clock powering a chip, signals barely have enough time to make it across the chip before the next clock tick. At this point, speeding up the clock frequency could become disastrous.

This is where a chip that is not constricted by clock comes in to action.

The clockless approach, which uses a technique known as asynchronous logic, differs from conventional computer circuit design in that the switching on and off of digital circuits is controlled individually by specific pieces of data rather than by a tyrannical clock that forces all of the millions of circuits on a chip to march in unison.

A major hindrance to the development of clockless chips is the competitiveness of the computer industry. Presently, it is nearly impossible for companies to develop and manufacture a clockless chip while keeping the cost reasonable. Another problem is that there aren't many tools for developing asynchronous chips. Until this changes, clockless chips will not be a major player in the market.

In this seminar the topics covered are: the general concept of asynchronous circuits, their design issues, and types of design. The major designs discussed are the bounded delay method, the delay insensitive method, and Null Conventional Logic (NCL).

The seminar also does a comparison of synchronous and asynchronous circuits and the applications in which asynchronous circuits are used.

*************************************************************************************************************************

14)Artificial Intelligence for Speech Recognition

Artificial Intelligence (AI) involves two basic ideas. First, it involves studying the thought processes of human beings. Second, it deals with representing those processes via machines (computers, robots, etc). AI is the behavior of a machine which, if performed by a human being, would be called intelligent. It makes machines smarter and more useful, and is less expensive than natural intelligence. Natural Language Processing (NLP) refers to Artificial Intelligence methods of communicating with a computer in a natural language like English. The main objective of an NLP program is to understand input and initiate action.

The input words are scanned and matched against internally stored known words. Identification of a keyword causes some action to be taken. In this way, one can communicate with the computer in one's own language. One of the main benefits of a speech recognition system is that it lets the user do other work simultaneously.
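The keyword-matching step described above can be sketched in a few lines (the keywords and actions here are invented for illustration):

```python
# Toy keyword-spotting sketch: scan input words against a stored
# keyword-to-action table; the first recognized keyword triggers its action.
ACTIONS = {
    "open": "opening file",
    "save": "saving file",
    "exit": "shutting down",
}

def respond(utterance):
    """Scan the utterance word by word and dispatch on the first known keyword."""
    for word in utterance.lower().split():
        if word in ACTIONS:
            return ACTIONS[word]
    return "sorry, I did not understand"

assert respond("Please SAVE my work") == "saving file"
assert respond("hello there") == "sorry, I did not understand"
```

Real NLP systems go far beyond this lookup (parsing, context, ambiguity), but keyword spotting is exactly the "match against internally stored known words" mechanism the paragraph describes.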

*********************************************************************************************************************
15)Cryptovirology

Cryptovirology is a field that studies how to use cryptography to design powerful malicious software. It encompasses overt attacks such as cryptoviral extortion, where a cryptovirus, cryptoworm, or cryptotrojan hybrid encrypts the victim's files and the user must pay the malware author to receive the needed session key (which is encrypted under the author's public key contained in the malware). The field also encompasses covert attacks in which the attacker secretly steals private information such as private keys. An example of the latter type of attack is the asymmetric backdoor: a backdoor (e.g., in a cryptosystem) that can only be used by the attacker even after it is found. There are many other attacks in the field that are not mentioned here.

*******************************************************************************************************************

16)An Introduction to Artificial Life

Artificial Life, also known as alife or a-life, is the study of life through the use of human-made analogs of living systems. Computer scientist Christopher Langton coined the term in the late 1980s when he held the first 'International Conference on the Synthesis and Simulation of Living Systems' (otherwise known as Artificial Life I) at the Los Alamos National Laboratory in 1987.

The focus of this seminar is Artificial Life in software. Topics which will be covered include: what Artificial Life (ALife) is and is about; open research problems in ALife; presuppositions underlying ALife in software; basic requirements of an ALife software system; and some guidelines for designing ALife in software. A few ALife software systems will also be introduced to help concretize the concepts.

**************************************************************************************************************

17)SemaCode

Semacode is a private company, and also that company's trade name for machine-readable ISO/IEC 16022 Data Matrix symbols which encode Internet Uniform Resource Locators (URLs). It is primarily aimed at use with cellular phones that have built-in cameras.

Using the Semacode SDK software, a URL can be converted into a type of barcode resembling a crossword puzzle, called a "tag". Tags can be quickly captured with a mobile phone's camera and decoded to obtain a Web site address, which can then be accessed via the phone's web browser.

**********************************************************************************************************************

18)Epistemology

Epistemology or theory of knowledge is the branch of philosophy that studies the nature and scope of knowledge. The term epistemology is based on the Greek words episteme (meaning knowledge) and logos (meaning account/explanation); it is thought to have been coined by the Scottish philosopher James Frederick Ferrier.

Much of the debate in this field has focused on analyzing the nature of knowledge and how it relates to similar notions such as truth, belief, and justification. It also deals with the means of production of knowledge, as well as skepticism about different knowledge claims. In other words, epistemology addresses the questions: What is knowledge? How is knowledge acquired? And what do people know? Although approaches to answering any one of these questions frequently involve theories connected to the others (for example, some theories of what knowledge is are influenced by broad views as to what people know, with restrictive definitions of knowledge thereby dismissed), there is enough particular to each that they may be treated separately.

There are many different topics, stances, and arguments in the field of epistemology. Recent studies have dramatically challenged centuries-old assumptions, and it therefore continues to be vibrant and dynamic.

************************************************************************************************************

19)Boids

Boids, developed by Craig Reynolds in 1986, is an artificial life program that simulates the flocking behaviour of birds.

As with most artificial life simulations, Boids is an example of emergent behaviour; that is, the complexity of Boids arises from the interaction of individual agents (the boids, in this case) adhering to a set of simple rules. The rules applied in the simplest Boids world are as follows:

    * separation: steer to avoid crowding local flockmates
    * alignment: steer towards the average heading of local flockmates
    * cohesion: steer to move toward the average position of local flockmates

More complex rules can be added, such as obstacle avoidance and goal seeking.
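The three rules above take only a few lines to express. The following toy simulation is an illustration only (not Reynolds' original code; the radii and weighting factors are arbitrary choices), treating each boid as a 2-D position plus velocity:

```python
import math

# Toy boids step: each boid is [x, y, vx, vy]. Radii and weights below
# are arbitrary illustrative choices, not Reynolds' original parameters.
NEIGHBOR_RADIUS = 50.0
SEPARATION_RADIUS = 10.0

def step(boids, dt=1.0):
    updated = []
    for b in boids:
        neighbors = [o for o in boids
                     if o is not b and math.dist(b[:2], o[:2]) < NEIGHBOR_RADIUS]
        ax = ay = 0.0
        if neighbors:
            n = len(neighbors)
            # cohesion: steer toward the average position of local flockmates
            cx = sum(o[0] for o in neighbors) / n
            cy = sum(o[1] for o in neighbors) / n
            ax += (cx - b[0]) * 0.01
            ay += (cy - b[1]) * 0.01
            # alignment: steer toward the average heading of local flockmates
            ax += (sum(o[2] for o in neighbors) / n - b[2]) * 0.05
            ay += (sum(o[3] for o in neighbors) / n - b[3]) * 0.05
            # separation: steer away from flockmates that are too close
            for o in neighbors:
                d = math.dist(b[:2], o[:2])
                if 0 < d < SEPARATION_RADIUS:
                    ax += (b[0] - o[0]) / d
                    ay += (b[1] - o[1]) / d
        vx, vy = b[2] + ax * dt, b[3] + ay * dt
        updated.append([b[0] + vx * dt, b[1] + vy * dt, vx, vy])
    return updated

flock = step([[0, 0, 1, 0], [5, 5, 0, 1], [40, 0, 1, 1]])
```

Note that no boid is told where the flock is going; flocking falls out of the three local rules, which is exactly the emergent behaviour described above.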

The movement of Boids can either be characterized as chaotic (splitting groups and wild behaviour) or orderly. Unexpected behaviours, such as splitting flocks and reuniting after avoiding obstacles, can be considered emergent.

The boids framework is often used in computer graphics, providing realistic-looking representations of flocks of birds and other creatures, such as schools of fish or herds of animals.

Boids works in a manner similar to cellular automata: each boid acts autonomously and references only a local neighbourhood, just as each cell in a cellular automaton does.

***************************************************************************************************************

19)Sun Spot

Sun SPOT (Sun Small Programmable Object Technology) is a wireless sensor network (WSN) mote developed by Sun Microsystems. The device is built on the IEEE 802.15.4 standard. Unlike other available mote systems, the Sun SPOT is built on a Java 2 Micro Edition (J2ME) virtual machine.

Hardware
The completely assembled device should fit in the palm of your hand.

Processing
    * 180 MHz 32-bit ARM920T core, 512K RAM, 4M Flash
    * 2.4 GHz IEEE 802.15.4 radio with integrated antenna
    * USB interface

Sensor Board
    * 2G/6G 3-axis accelerometer
    * Temperature sensor
    * Light sensor
    * 8 tri-color LEDs
    * 6 analog inputs
    * 2 momentary switches
    * 5 general-purpose I/O pins and 4 high-current output pins

Networking
The motes communicate using the IEEE 802.15.4 standard including the base-station approach to sensor networking. This implementation of 802.15.4 is not ZigBee-compliant.
Software
The device's use of Java device drivers is particularly notable, as Java is known for its hardware independence. The Sun SPOT runs a small J2ME virtual machine directly on the processor, without an operating system.

******************************************************************************************************************

20)WarDriving

Wardriving is searching for Wi-Fi wireless networks from a moving vehicle. It involves using a car or truck and a Wi-Fi-equipped computer, such as a laptop or a PDA, to detect the networks. It was also known (as of 2002) as WiLDing (Wireless LAN Driving), a term originating in the San Francisco Bay Area with the Bay Area Wireless Users Group (BAWUG) that never gained popularity and is no longer used. It is similar to using a radio scanner.

Many wardrivers use GPS devices to record the location of each network they find and log it on a website (the most popular being WiGLE). For better range, antennas are built or bought, varying from omnidirectional to highly directional. Software for wardriving is freely available on the Internet: notably NetStumbler for Windows, Kismet for Linux, and KisMAC for Macintosh.

Wardriving was named after wardialing (popularized in the Matthew Broderick movie WarGames), the earlier practice of using software and a phone modem to dial numbers sequentially and see which ones answered with a fax machine, computer, or similar device.

***********************************************************************************************************************

21)Honey pot

A honeypot is an information system resource whose value lies in unauthorized or illicit use of that resource.

A honeypot is a closely monitored network decoy serving several purposes: it can distract adversaries from more valuable machines on a network, provide early warning about new attack and exploitation trends, or allow in-depth examination of adversaries during and after exploitation. Deploying a physical honeypot is often time-intensive and expensive, as different operating systems require specialized hardware and every honeypot requires its own physical system.

Honeypots are a powerful new technology with incredible potential. They can do everything from detecting new attacks never before seen in the wild to tracking automated credit card fraud and identity theft. In the past several years the technology has been developing rapidly, with new concepts such as honeypot farms, commercial and open source solutions, and documented findings being released.

A great deal of research has focused on identifying, capturing, and researching external threats. While malicious and dangerous, these attacks are often random, with attackers more interested in how many systems they can break into than in which systems they break into. To date, limited research has been done on how honeypots can apply to a far more dangerous and devastating threat: the advanced insider. This trusted individual knows the network and the organization. Often, these individuals are after specific information rather than computers. This risk has proven far more dangerous, and far more difficult to mitigate.
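As a minimal illustration of the decoy idea, the sketch below (a toy, not any production honeypot; the port and banner are invented for illustration) listens on an otherwise unused port, logs every connection attempt, and presents a fake SSH banner:

```python
import datetime
import socket

# Toy low-interaction honeypot (illustrative only). Nothing legitimate
# should ever talk to this port, so every connection it logs is suspect.
def run_honeypot(host="0.0.0.0", port=2222, max_conns=1):
    log = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        for _ in range(max_conns):
            conn, addr = srv.accept()
            with conn:
                # record timestamp and source of the (presumed hostile) probe
                log.append((datetime.datetime.now().isoformat(),
                            addr[0], addr[1]))
                conn.sendall(b"SSH-2.0-OpenSSH_4.3\r\n")  # decoy banner
    return log
```

Because the resource has no legitimate users, the log needs no filtering, which is why honeypot data has so few false positives compared with conventional intrusion detection.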

***************************************************************************************************************

22)Swarm Intelligence

Swarm intelligence (SI) is an artificial intelligence technique based on the study of collective behavior in decentralized, self-organized systems. The expression "swarm intelligence" was introduced by Beni and Wang in 1989, in the context of cellular robotic systems.

SI systems are typically made up of a population of simple agents interacting locally with one another and with their environment. Although there is normally no centralized control structure dictating how individual agents should behave, local interactions between such agents often lead to the emergence of global behavior. Examples of such systems can be found in nature, including ant colonies, bird flocking, animal herding, bacterial growth and fish schooling.

The application of swarm principles to large numbers of robots is called swarm robotics.
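One widely used SI technique is particle swarm optimization (PSO), in which simple agents are drawn toward their own best-known position and the swarm's. A minimal sketch (the parameter values are conventional but illustrative), minimizing the sphere function:

```python
import random

def pso(f, dim=2, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize f over a box using a basic global-best PSO."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's personal best
    pbest_val = [f(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward swarm best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (g[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < f(g):
                    g = pos[i][:]
    return g, f(g)

best, val = pso(lambda p: sum(x * x for x in p))
```

No particle knows the shape of the function; the swarm converges purely through the two local pulls, mirroring the emergence of global behavior described above.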

************************************************************************************************************

23)QR Code

A QR Code is a matrix code (or two-dimensional bar code) created by the Japanese corporation Denso-Wave in 1994. The name "QR" is derived from "Quick Response", as the creator intended the code to allow its contents to be decoded at high speed. QR Codes are currently the most popular type of two-dimensional code in Japan.

Although initially used for tracking parts in vehicle manufacturing, QR Codes are now used for inventory management in a wide variety of industries. More recently, the inclusion of QR Code reading software on camera phones in Japan has led to a wide variety of new, consumer-oriented applications aimed at relieving the user of the tedious task of entering data into their mobile phone. QR Codes storing addresses and URLs are becoming increasingly common in magazines and advertisements in Japan. The addition of QR Codes to business cards is also becoming common, greatly simplifying the task of entering the personal details of a new acquaintance into the address book of one's mobile phone.

********************************************************************************************************************

24)Mesotechnology

Mesotechnology describes a budding research field which could replace nanotechnology in the future as the primary means to control matter at length scales ranging from a cluster of atoms to microscopic elements. The prefix meso- comes from the Greek word mesos, meaning middle; hence the technology spans a range of length scales, as opposed to nanotechnology, which is concerned only with the smallest atomic scales.

Although the term itself is still quite new, the general concept is not. Many fields of science have traditionally focused either on single discrete elements or on large statistical collections, where many theories have been successfully applied. In the field of physics, for example, Quantum Mechanics describes very well phenomena on the atomic to nanoscale, while classical Newtonian Mechanics describes the behavior of objects on the microscale and up. However, the length scale in the middle (the mesoscale) is not well described by either theory. Similarly, psychologists focus heavily on the behavior and mental processes of the individual, while sociologists study the behavior of large societal groups, but what happens when only three people are interacting? This is the mesoscale.

****************************************************************************************************************************

25)Bio-inspired computing

Bio-inspired computing is a field of study that loosely knits together subfields related to the topics of connectionism, social behaviour and emergence. It is often closely related to the field of artificial intelligence, as many of its pursuits can be linked to machine learning. It relies heavily on the fields of biology, computer science and mathematics. Briefly put, it is the use of computers to model nature, and simultaneously the study of nature to improve the usage of computers. Biologically-inspired computing is a major subset of natural computation.

One way in which bio-inspired computing differs from artificial intelligence (AI) is in its more evolutionary approach to learning, as opposed to what could be described as the creationist methods used in traditional AI. In traditional AI, intelligence is often programmed from above: the programmer is the creator, who makes something and imbues it with intelligence. Bio-inspired computing, on the other hand, takes a more bottom-up, decentralised approach; bio-inspired techniques often involve specifying a set of simple rules, a set of simple organisms which adhere to those rules, and a method of iteratively applying those rules. After several generations of rule application, some form of complex behaviour usually arises.
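The bottom-up recipe just described (simple rules, a population that follows them, iterated application) can be illustrated with a toy genetic algorithm evolving bit strings toward a target; the target, rates and population size are arbitrary illustrative choices:

```python
import random

TARGET = [1] * 20  # the "environment": fitness is similarity to this string

def fitness(ind):
    return sum(a == b for a, b in zip(ind, TARGET))

def evolve(pop_size=30, generations=60, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # rule 1: the fitter half survives and reproduces
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # rule 2: offspring mix two parents (one-point crossover)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]
            # rule 3: occasional random mutation
            children.append([bit ^ 1 if random.random() < mutation_rate else bit
                             for bit in child])
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Nothing in the three rules mentions the solution; fit individuals simply accumulate over generations, which is the "intelligence from below" contrast with traditional AI drawn above.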

****************************************************************************************************************************

26)Anomaly Detection

      Network intrusion detection systems often rely on matching patterns that are gleaned from known attacks. While this method is reliable and rarely produces false alarms, it has the obvious disadvantage that it cannot detect novel attacks. An alternative approach is to learn a model of normal traffic and report deviations, but these anomaly models are typically restricted to modeling IP addresses and ports, and do not include the application payload where many attacks occur. We describe a novel approach to anomaly detection.

We extract a set of attributes from each event (IP packet or TCP connection), including strings in the payload, and induce a set of conditional rules which have a very low probability of being violated in a nonstationary model of the normal network traffic in the training data. In the 1999 DARPA intrusion detection evaluation data set, we detect about 60% of 190 attacks at a false alarm rate of 10 per day (100 total). We believe that anomaly detection can work because most attacks exploit software or configuration errors that escaped field testing, and so are only exposed under unusual conditions.
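A drastically simplified version of this idea (not the authors' actual rule-learning algorithm) records which values each attribute takes in attack-free training traffic, then flags test events whose attributes take never-before-seen values:

```python
# Toy value-based anomaly detector: one rule per attribute of the form
# "this attribute only takes values seen during training".
def train(events):
    allowed = {}
    for ev in events:
        for attr, val in ev.items():
            allowed.setdefault(attr, set()).add(val)
    return allowed

def score(allowed, ev):
    """Count how many attributes of `ev` violate their learned rule."""
    return sum(1 for attr, val in ev.items()
               if val not in allowed.get(attr, set()))

# Hypothetical attack-free training traffic (attribute names are invented).
normal = [
    {"dst_port": 80, "flags": "SYN", "payload_start": "GET"},
    {"dst_port": 80, "flags": "SYN", "payload_start": "POST"},
    {"dst_port": 25, "flags": "SYN", "payload_start": "HELO"},
]
model = train(normal)
benign = {"dst_port": 80, "flags": "SYN", "payload_start": "GET"}
attack = {"dst_port": 80, "flags": "SYN", "payload_start": "\x90\x90"}
```

The attack event is flagged not because its signature is known, but because its payload prefix was never observed in normal traffic, which is how novel attacks become detectable.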

Though our rule learning techniques are applied to network intrusion detection, they are general enough for detecting anomalies in other applications.

**************************************************************************************************************

27)Automated Authentication of Identity Documents

Identity documents (IDs), such as passports and drivers' licenses, are relied upon to deter fraud and stop terrorism. A multitude of document types and increased expertise in forgery make human inspection of such documents inconsistent and error prone. New generation reader/authenticator technology can assist in the ID screening process. Such devices can read the information on the ID, authenticate it, and provide an overall security risk analysis. This talk will discuss how image processing and pattern recognition technology were used in the implementation of one such commercial device, the AssureTec i-Dentify reader. The reader is based on a high resolution color CCD camera which automatically captures a presented ID under a variety of light sources (Visible, UV, IR, and others) in a few seconds.

Automated processing of IDs involves a number of interesting technical challenges which will be discussed: sensing the presence of a document in the reader viewing area; cropping the document and extracting its size; identifying the document type by rapid comparison to a known document library; locating, extracting, and image processing of data fields of various types (text, photo, symbols, barcodes); processing text fields with appropriate OCR engines; cross-checking data from different parts of a document for consistency; checking for the presence of security features (e.g., UV patterns); and providing an overall risk assessment that the document is falsified.

A live demonstration of the AssureTec i-Dentify reader will be given. 

**********************************************************************************************************************

28)Brain Computer Interface

Brain-computer interface is a staple of science fiction writing. In its earliest incarnations no mechanism was thought necessary, as the technology seemed so far-fetched that no explanation was likely. As more became known about the brain, however, the possibility has become more real and the science fiction more technically sophisticated. Recently, the cyberpunk movement has adopted the idea of 'jacking in': sliding 'biosoft' chips into slots implanted in the skull (Gibson, W. 1984).

Although such biosofts are still science fiction, there have been several recent steps toward interfacing the brain and computers. Chief among these are techniques for stimulating and recording from areas of the brain with permanently implanted electrodes, and using conscious control of EEG to control computers.

Some preliminary work is being done on synapsing neurons onto silicon transistors and on growing neurons into neural networks on top of computer chips. The most advanced work in designing a brain-computer interface has stemmed from the evolution of traditional electrodes. There are essentially two main problems: stimulating the brain (input) and recording from the brain (output).

Traditionally, both input and output were handled by electrodes pulled from metal wires and glass tubing. Using conventional electrodes, multi-unit recordings can be constructed from multibarrelled pipettes. In addition to being fragile and bulky, the electrodes in these arrays are often too far apart, as most fine neural processes are only 0.1 to 2 µm apart.

Pickard describes a new type of electrode which circumvents many of the problems listed above. These printed circuit microelectrodes (PCMs) are manufactured in the same manner as computer chips: a design of a chip is photoreduced to produce an image on a photosensitive glass plate, which is then used as a mask covering a UV-sensitive glass or plastic film.

A PCM has three essential elements:

    1) the tissue terminals;
    2) a circuit board controlling or reading from the terminals;
    3) an input/output controller-interpreter, such as a computer.


***************************************************************************************************************************

29)Artificial passenger

An artificial passenger (AP) is a device that would be used in a motor vehicle to make sure that the driver stays awake. IBM has developed a prototype that holds a conversation with a driver, telling jokes and asking questions intended to determine whether the driver can respond alertly enough. In the IBM approach, an artificial passenger would use a microphone for the driver, a speech generator, and the vehicle's audio speakers to converse with the driver.

The conversation would be based on a personalized profile of the driver. A camera could be used to evaluate the driver's facial state, and a voice analyzer to evaluate whether the driver was becoming drowsy. If a driver seemed to display too much fatigue, the artificial passenger might be programmed to open all the windows, sound a buzzer, increase background music volume, or even spray the driver with ice water.
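The escalating countermeasures could be sketched as a simple decision rule. Everything below (the thresholds, the combined fatigue score, the action names) is hypothetical and invented for illustration; it is not IBM's implementation:

```python
def respond_to_fatigue(fatigue_score):
    """Map a combined camera/voice fatigue score in [0, 1] to countermeasures.

    Thresholds are invented for illustration; a real system would tune them
    against the driver's personalized profile.
    """
    actions = []
    if fatigue_score > 0.3:
        actions.append("ask an engaging question")
    if fatigue_score > 0.5:
        actions.append("increase background music volume")
    if fatigue_score > 0.7:
        actions.extend(["open all windows", "sound buzzer"])
    if fatigue_score > 0.9:
        actions.append("spray driver with ice water")
    return actions
```

The point of the staircase is that mild drowsiness gets a conversational nudge, while only severe fatigue triggers the disruptive responses.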

*************************************************************************************************************

30)ShotCode

ShotCode is a circular barcode created by OP3. It uses a dartboard-like circle, with a bull's-eye in the centre and data circles surrounding it. The technology reads data bits from these data circles by measuring the angle and distance of each from the bull's-eye.

ShotCodes are designed to be read with a regular camera (including those found on mobile phones and webcams) without the need to purchase other specialised hardware. Because of the circular design, it is also possible for software to detect the angle from which the barcode is read. ShotCodes differ from matrix barcodes in that they do not store regular data; rather, they store an encoded URL from which the reading device can download that data.

**********************************************************************************************************************

1)AKA

AKA stands for the Authentication and Key Agreement security protocol. It is a mechanism which performs authentication and session key distribution in Universal Mobile Telecommunications System (UMTS) networks. AKA is a challenge-response based mechanism that uses symmetric cryptography. AKA is typically run in a UMTS IM Services Identity Module (ISIM), which resides on a smart-card-like device that also provides tamper-resistant storage of shared secrets.

Exponential key exchange
The first publicly known public-key agreement protocol was the Diffie-Hellman exponential key exchange, in which two people jointly exponentiate a generator with random numbers, in such a way that an eavesdropper has no way of guessing what the key is. However, exponential key exchange in and of itself does not specify any prior agreement or subsequent authentication between the participants. It has thus been described as an anonymous key agreement protocol.

Authentication
Anonymous key exchange, like Diffie-Hellman, does not provide authentication of the parties, and is thus vulnerable to man-in-the-middle attacks. A wide variety of cryptographic authentication schemes and protocols have been developed to provide authenticated key agreement and prevent man-in-the-middle and related attacks. These methods generally mathematically bind the agreed key to other agreed-upon data, such as: public/private key pairs, shared secret keys, passwords.

******************************************************************************************************************************************

2)JIRO Technology

*****************************************************************************************************

3)Robocode

Robocode is an open source educational game by Mathew Nelson (originally provided by IBM). It is designed to help people learn to program in Java and enjoy the experience.
It is very easy to start – a simple robot can be written in just a few minutes – but perfecting a bot can take months or more. Competitors write software that controls a miniature tank that fights other identically-built (but differently programmed) tanks in a playing field. Robots move, shoot at each other, scan for each other, and hit the walls (or other robots) if they aren't careful. Though the idea of this game may seem simple, the actual strategy needed to win is not. Good robots have hundreds of lines of code dedicated to strategy. Some of the more successful robots use techniques such as statistical analysis and attempts at neural networks in their designs. One can test a robot against many other competitors by downloading their bytecode, so design competition is fierce. Robocode provides a security sandbox (bots are restricted in what they can do on the machine they run on), which makes this a safe thing to do.

*******************************************************************************************************

4)Signcryption

Signcryption is a new paradigm in public key cryptography that simultaneously fulfils both the functions of digital signature and public key encryption in a logically single step, and with a cost significantly lower than that required by the traditional signature-then-encryption approach. The main disadvantage of the traditional approach is that digitally signing a message and then encrypting it consumes more machine cycles and bloats the message by introducing extra bits, so decrypting and verifying the message at the receiver's end uses up a lot of computational power. Thus the cost of delivering a message using signature-then-encryption is in effect the sum of the costs of both digital signatures and public key encryption. Is it possible to send a message of arbitrary length with cost less than that required by signature-then-encryption?
This topic has similar mathematical content to the lecture on RSA, and requires a good understanding of the various encryption algorithms such as RSA and DES.

*****************************************************************************************************

5)Genetic programming

Genetic programming (GP) is an automated methodology inspired by biological evolution to find computer programs that best perform a user-defined task. It is therefore a particular machine learning technique that uses an evolutionary algorithm to optimize a population of computer programs according to a fitness landscape determined by a program's ability to perform a given computational task. The first experiments with GP were reported by Stephen F. Smith (1980) and Nichael L. Cramer (1985), as described in the famous book Genetic Programming: On the Programming of Computers by Means of Natural Selection by John Koza (1992).

Computer programs in GP can be written in a variety of programming languages. In the early (and traditional) implementations of GP, program instructions and data values were organized in tree structures, thus favoring the use of languages that naturally embody such a structure (an important example pioneered by Koza is Lisp). Other forms of GP have been suggested and successfully implemented, such as the simpler linear representation which suits the more traditional imperative languages [see, for example, Banzhaf et al. (1998)]. The commercial GP software Discipulus, for example, uses linear genetic programming combined with machine code to achieve better performance.
By contrast, MicroGP uses an internal representation similar to linear genetic programming to generate programs that fully exploit the syntax of a given assembly language. GP is very computationally intensive, and so in the 1990s it was mainly used to solve relatively simple problems. More recently, however, thanks to various improvements in GP technology and the well-known exponential growth in CPU power, GP has started delivering a number of outstanding results. At the time of writing, nearly 40 human-competitive results have been gathered, in areas such as quantum computing, electronic design, game playing, sorting, searching and many more. These results include the replication or infringement of several post-year-2000 inventions, and the production of two patentable new inventions.

Developing a theory for GP was very difficult, and so in the 1990s genetic programming was considered a sort of pariah amongst the various techniques of search. However, after a series of breakthroughs in the early 2000s, the theory of GP has had a formidable and rapid development: it has been possible to build exact probabilistic models of GP (schema theories and Markov chain models) and to show that GP is more general than, and in fact includes, genetic algorithms. Genetic programming techniques have now been applied to evolvable hardware as well as computer programs. Meta-genetic programming is the technique of evolving a genetic programming system using genetic programming itself. Critics have argued that it is theoretically impossible, but more research is needed.

**************************************************************************************************************

7)Gecko

Gecko is the open source, free software web browser layout engine used in all Mozilla-branded software and its derivatives, including later Netscape releases. Written in C++, Gecko is designed to support open Internet standards.
Originally created by Netscape Communications Corporation, its development is now overseen by the Mozilla Foundation. Gecko offers a rich programming API that makes it suitable for a wide variety of roles in Internet-enabled applications, such as web browsers, content presentation and client/server [1]. Primarily it is used by Mozilla browser derivatives, such as Mozilla Firefox, K-Meleon and Netscape, to render websites and the browser user interface (written in XUL), but it is used elsewhere as well. Gecko is cross-platform and works on a number of different operating systems, including Microsoft Windows, the BSDs, GNU/Linux and Mac OS X. Gecko is generally considered to be the second most popular layout engine on the Web, after Trident (used by Internet Explorer for Windows since version 4), and followed by KHTML (used by Konqueror), WebCore (used by Safari), Presto (used by Opera) and Tasman (used by Internet Explorer for Mac).

******************************************************************************************************************

8)Symfony

Symfony is a web application framework for PHP5 projects. It aims to speed up the creation and maintenance of web applications, and to replace repetitive coding tasks with power, control and pleasure. Its very small number of prerequisites makes symfony easy to install on any configuration; you just need Unix or Windows with a web server and PHP 5 installed. It is compatible with almost every database system. In addition, it has very small overhead, so the benefits of the framework don't come at the cost of increased hosting costs. For people used to PHP and the design patterns of Internet applications, using symfony is so natural and easy that the learning curve is reduced to less than a day. The clean design and code readability will keep your delays short.
Developers can apply agile development principles (such as DRY, KISS or the XP philosophy) and focus on applicative logic without losing time writing endless XML configuration files. Symfony is aimed at building robust applications in an enterprise context. This means that you have full control over the configuration: from the directory structure to the foreign libraries, almost everything can be customized. To match your enterprise's development guidelines, symfony is bundled with additional tools to help you test, debug and document your project. Last but not least, by choosing symfony you get the benefits of an active open-source community. It is entirely free and published under the MIT license. Symfony is sponsored by Sensio, a French web agency.

************************************************************************************************************

8)CakePHP

CakePHP is a web application framework written in PHP, modeled after the concepts of Ruby on Rails.

Features:

    * Model, View, Controller architecture
    * View helpers for AJAX, JavaScript, HTML forms and more
    * Built-in validation
    * Application scaffolding
    * Application and CRUD code generation via Bake
    * Access control lists
    * Data sanitization
    * Security, session, and request handling components
    * Flexible view caching

Active, friendly community – just join our IRC channel to see who's in. We'd love to help you get started.

    1. Flexible license – Cake is distributed under the MIT License
    2. Clean IP – every line of code was written by the CakePHP development team
    3. Extremely simple – just look at the name… it's Cake
    4. Rapid development – build apps faster than ever before
    5. Best practices – Cake is easy to understand and sets the industry standard in security, authentication, and session handling, among other features
    6. OO – whether you are a seasoned object-oriented programmer or a beginner, you'll feel comfortable
    7. No configuration – set up the database and watch the magic begin

****************************************************************************************************************

9)OWL

OWL is an acronym for Web Ontology Language, a markup language for publishing and sharing data using ontologies on the Internet. OWL is a vocabulary extension of the Resource Description Framework (RDF) and is derived from the DAML+OIL web ontology language (see also DAML and OIL). Together with RDF and other components, these tools make up the Semantic Web project. OWL represents the meanings of terms in vocabularies, and the relationships between those terms, in a way that is suitable for processing by software. The OWL specification is maintained by the World Wide Web Consortium (W3C).

OWL currently has three flavors: OWL Lite, OWL DL, and OWL Full. These flavors incorporate different features; in general it is easier to reason about OWL Lite than about OWL DL, and about OWL DL than about OWL Full. OWL Lite and OWL DL are constructed in such a way that every statement can be decided in finite time; OWL Full can contain endless loops.

**********************************************************************************************************************

10)SIP (Session Initiation Protocol)

Session Initiation Protocol (SIP) is a protocol developed by the IETF MMUSIC Working Group and a proposed standard for initiating, modifying, and terminating an interactive user session that involves multimedia elements such as video, voice, instant messaging, online games, and virtual reality. SIP clients traditionally use TCP and UDP port 5060 to connect to SIP servers and other SIP endpoints. SIP is primarily used in setting up and tearing down voice or video calls; however, it can be used in any application where session initiation is a requirement, including event subscription and notification, terminal mobility and so on. A large number of SIP-related RFCs define behavior for such applications.
All voice/video communications are done over RTP. A motivating goal for SIP was to provide a signaling and call setup protocol for IP-based communications that can support a superset of the call processing functions and features present in the public switched telephone network (PSTN). SIP-enabled telephony networks can also implement many of the more advanced call processing features present in Signalling System 7 (SS7), though the two protocols themselves are very different. SS7 is a highly centralized protocol, characterized by a highly complex central network architecture and dumb endpoints (traditional telephone handsets); SIP is a peer-to-peer protocol.
SIP network elements
Hardware endpoints, devices with the look, feel, and shape of a traditional telephone but that use SIP and RTP for communication, are commercially available from several vendors. Some of these can use Electronic Numbering (ENUM) or DUNDi to translate existing phone numbers to SIP addresses using DNS, so calls to other SIP users can bypass the telephone network, even though your service provider might normally act as a gateway to the PSTN for traditional phone numbers (and charge you for it). SIP makes use of elements called proxy servers to help route requests to the user's current location, authenticate and authorize users for services, implement provider call-routing policies, and provide features to users. SIP also provides a registration function that allows users to upload their current locations for use by proxy servers. Since registrations play an important role in SIP, a User Agent Server that handles a REGISTER is given the special name registrar. An important concept is that the distinction between types of SIP servers is logical, not physical.
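To make the registration function concrete, here is a minimal sketch of building a SIP REGISTER request, the message a user agent sends to a registrar to upload its current location. All addresses, tags, and the Call-ID below are made-up examples; a real SIP stack also adds Via branch parameters, authentication, and retransmission handling.

```python
# Illustrative construction of a minimal SIP REGISTER request.
# Field values are hypothetical; this is a sketch, not a SIP stack.

def build_register(user, domain, contact_host, cseq=1):
    """Return a minimal SIP REGISTER message as a string."""
    lines = [
        f"REGISTER sip:{domain} SIP/2.0",
        f"Via: SIP/2.0/UDP {contact_host}:5060",
        f"From: <sip:{user}@{domain}>;tag=12345",
        f"To: <sip:{user}@{domain}>",
        f"Call-ID: abc123@{contact_host}",
        f"CSeq: {cseq} REGISTER",
        f"Contact: <sip:{user}@{contact_host}:5060>",
        "Expires: 3600",
        "Content-Length: 0",
        "",  # blank line terminates the header section
        "",
    ]
    return "\r\n".join(lines)

msg = build_register("alice", "example.com", "192.0.2.10")
print(msg.splitlines()[0])  # REGISTER sip:example.com SIP/2.0
```

The registrar would answer such a request with a `200 OK` (or challenge it with `401 Unauthorized`), after which proxy servers can route calls to the registered contact address.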
**************************************************************************************************************

11)SAFER
In cryptography, SAFER (Secure And Fast Encryption Routine) is the name of a family of block ciphers designed primarily by James Massey (one of the designers of IDEA) on behalf of Cylink Corporation. The early SAFER K and SAFER SK designs share the same encryption function, but differ in the number of rounds and the key schedule. More recent versions, SAFER+ and SAFER++, were submitted as candidates to the AES process and the NESSIE project respectively. All of the algorithms in the SAFER family are unpatented and available for unrestricted use. The first SAFER cipher was SAFER K-64, published by Massey in 1993, with a 64-bit block size. The K-64 denotes a key size of 64 bits. There was some demand for a version with a larger 128-bit key, and the following year Massey published such a variant incorporating a new key schedule designed by the Singapore Ministry for Home Affairs: SAFER K-128. However, both Lars Knudsen and Sean Murphy found minor weaknesses in this version, prompting a redesign of the key schedule to one suggested by Knudsen; these variants were named SAFER SK-64 and SAFER SK-128 respectively, the SK standing for "Strengthened Key schedule", though the RSA FAQ reports that "one joke has it that SK really stands for Stop Knudsen, a wise precaution in the design of any block cipher". Another variant with a reduced key size was published, SAFER SK-40, to comply with 40-bit export restrictions. All of these ciphers use the same round function consisting of four stages, as shown in the diagram: a key-mixing stage, a substitution layer, another key-mixing stage, and finally a diffusion layer.
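The round structure just outlined can be illustrated with its two most distinctive pieces, the exponentiation/logarithm S-boxes and the pseudo-Hadamard transform. This is a sketch of those components only, not a full or faithful SAFER implementation:

```python
# SAFER-style building blocks (illustrative sketch, not the full cipher).

# S-boxes: EXP[x] = 45^x mod 257, with the single value 256 mapped to 0
# so the table fits in a byte; LOG is its inverse table.
EXP = [pow(45, i, 257) % 256 for i in range(256)]
LOG = [0] * 256
for i, v in enumerate(EXP):
    LOG[v] = i

# Pseudo-Hadamard transform (PHT) on a pair of bytes, and its inverse.
def pht(a, b):
    return (2 * a + b) % 256, (a + b) % 256

def ipht(x, y):
    return (x - y) % 256, (2 * y - x) % 256

# The S-boxes invert each other, and the PHT is invertible:
assert all(LOG[EXP[x]] == x for x in range(256))
assert ipht(*pht(3, 5)) == (3, 5)
```

Because 257 is prime and 45 generates all its nonzero residues, EXP is a permutation of the byte values, which is what makes the two S-boxes exact inverses.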
In the first key-mixing stage, the plaintext block is divided into eight 8-bit segments, and subkeys are added using either addition modulo 256 (denoted by a + in a square) or XOR (denoted by a + in a circle). The substitution layer consists of two S-boxes, each the inverse of the other, derived from discrete exponentiation (45^x mod 257) and logarithm (log45 x) functions. After a second key-mixing stage there is the diffusion layer: a novel cryptographic component termed a pseudo-Hadamard transform (PHT). (The PHT was also later used in the Twofish cipher.)

*********************************************************************************************************************

12)Holographic Memory
Devices that use light to store and read data have been the backbone of data storage for nearly two decades. Compact discs revolutionized data storage in the early 1980s, allowing multiple megabytes of data to be stored on a disc with a diameter of a mere 12 centimeters and a thickness of about 1.2 millimeters. In 1997, an improved version of the CD, called the digital versatile disc (DVD), was released, which enabled the storage of full-length movies on a single disc. CDs and DVDs are the primary data storage methods for music, software, personal computing and video. A CD can hold 783 megabytes of data. A double-sided, double-layer DVD can hold 15.9 GB of data, which is about eight hours of movies. These conventional storage media meet today's storage needs, but storage technologies have to evolve to keep pace with increasing consumer demand. CDs, DVDs and magnetic storage all store bits of information on the surface of a recording medium. In order to increase storage capabilities, scientists are now working on a new optical storage method called holographic memory that will go beneath the surface and use the volume of the recording medium for storage, instead of only the surface area.
Three-dimensional data storage will be able to store more information in a smaller space and offer faster data transfer times. Holographic memory is a developing technology that promises to revolutionise storage systems. It can store up to 1 TB of data in a sugar-cube-sized crystal; data from more than 1000 CDs can fit into a holographic memory system. Most of the computer hard drives available today can hold only 10 to 40 GB of data, a small fraction of what a holographic memory system can hold. Conventional memories use only the surface to store data, but holographic data storage systems use the volume, which gives them advantages over conventional storage systems. The technique is based on the principle of holography. Scientist Pieter J. van Heerden first proposed the idea of holographic (three-dimensional) storage in the early 1960s. A decade later, scientists at RCA Laboratories demonstrated the technology by recording 500 holograms in an iron-doped lithium-niobate crystal and 550 holograms of high-resolution images in a light-sensitive polymer material. The lack of cheap parts and the advancement of magnetic and semiconductor memories placed the development of holographic data storage on hold.

****************************************************************************************************************

13)Clockless Chips
Clock speeds are now in the gigahertz range and there is not much room for speedup before physical realities start to complicate things. With a gigahertz clock powering a chip, signals barely have enough time to make it across the chip before the next clock tick. At this point, speeding up the clock frequency could become disastrous. This is where a chip that is not constrained by a clock comes into play.
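To give a flavor of clock-free operation, here is a behavioral sketch of the Muller C-element, a classic asynchronous-logic primitive (not discussed in the text itself): its output switches only when both inputs agree and otherwise holds its previous value, which is how asynchronous stages wait for their data to be ready instead of for a clock tick.

```python
# Behavioral model of a Muller C-element (illustrative sketch).
# Output follows the inputs only when they agree; otherwise it holds.

class CElement:
    def __init__(self):
        self.out = 0  # power-up state assumed to be 0

    def step(self, a, b):
        if a == b:          # both inputs agree: adopt their value
            self.out = a
        return self.out     # otherwise: hold the previous output

c = CElement()
print([c.step(a, b) for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]])
# -> [0, 0, 1, 1, 0]
```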
The clockless approach, which uses a technique known as asynchronous logic, differs from conventional computer circuit design in that the switching on and off of digital circuits is controlled individually by specific pieces of data rather than by a tyrannical clock that forces all of the millions of circuits on a chip to march in unison. A major hindrance to the development of clockless chips is the competitiveness of the computer industry: presently, it is nearly impossible for companies to develop and manufacture a clockless chip while keeping the cost reasonable. Another problem is that there are not many tools for developing asynchronous chips. Until this changes, clockless chips will not be a major player in the market. The topics covered in this seminar are the general concept of asynchronous circuits, their design issues and types of design. The major designs discussed are the bounded delay method, the delay insensitive method and Null Conventional Logic (NCL). The seminar also compares synchronous and asynchronous circuits and surveys the applications in which asynchronous circuits are used.

*************************************************************************************************************************************

14)Artificial Intelligence for Speech Recognition
Artificial Intelligence (AI) involves two basic ideas. First, it involves studying the thought processes of human beings. Second, it deals with representing those processes via machines (computers, robots, etc.). AI is the behavior of a machine which, if performed by a human being, would be called intelligent. It makes machines smarter and more useful, and is less expensive than natural intelligence. Natural Language Processing (NLP) refers to Artificial Intelligence methods of communicating with a computer in a natural language like English.
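A toy sketch of keyword-based NLP: scan the input words against a stored vocabulary of known words and trigger an action when a keyword is identified. The keywords and actions here are invented for illustration.

```python
# Toy keyword-matching "NLP": keywords and actions are made-up examples.
ACTIONS = {
    "open":  lambda obj: f"opening {obj}",
    "close": lambda obj: f"closing {obj}",
}

def respond(sentence):
    """Scan input words; the first known keyword triggers its action."""
    words = sentence.lower().split()
    for i, w in enumerate(words):
        if w in ACTIONS:
            # treat the following word as the keyword's object, if any
            obj = words[i + 1] if i + 1 < len(words) else "nothing"
            return ACTIONS[w](obj)
    return "no keyword recognized"

print(respond("please open window"))  # opening window
```

Real speech-recognition front ends add an acoustic model that turns audio into the word sequence, but the keyword-to-action step works on the same principle.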
The main objective of an NLP program is to understand input and initiate action. The input words are scanned and matched against internally stored known words; identification of a keyword causes some action to be taken. In this way, one can communicate with the computer in one's own language. One of the main benefits of a speech recognition system is that it lets the user do other work simultaneously.

*********************************************************************************************************************

15)Cryptovirology
Cryptovirology is a field that studies how to use cryptography to design powerful malicious software. It encompasses overt attacks such as cryptoviral extortion, where a cryptovirus, cryptoworm, or cryptotrojan hybrid encrypts the victim's files and the user must pay the malware author to receive the needed session key (which is encrypted under the author's public key contained in the malware). The field also encompasses covert attacks in which the attacker secretly steals private information such as private keys. An example of the latter type of attack is the asymmetric backdoor: a backdoor (e.g., in a cryptosystem) that can only be used by the attacker even after it is found. There are many other attacks in the field that are not mentioned here.

*******************************************************************************************************************

16)An Introduction to Artificial Life
Artificial Life, also known as alife or a-life, is the study of life through the use of human-made analogs of living systems. Computer scientist Christopher Langton coined the term in the late 1980s when he held the first 'International Conference on the Synthesis and Simulation of Living Systems' (otherwise known as Artificial Life I) at the Los Alamos National Laboratory in 1987. The focus of this seminar is Artificial Life in software.
Topics which will be covered include: what Artificial Life (ALife) is and is about; open research problems in ALife; presuppositions underlying ALife in software; basic requirements of an ALife software system; and some guidelines for designing ALife in software. A few ALife software systems will also be introduced to help concretize the concepts.

**************************************************************************************************************

17)SemaCode
Semacode is a private company and also this company's trade name for machine-readable ISO/IEC 16022 Data Matrix symbols which encode internet Uniform Resource Locators (URLs). It is primarily aimed at being used with cellular phones which have built-in cameras. The Data Matrix specification is given by the ISO/IEC 16022 standard. Using Semacode SDK software, a URL can be converted into a type of barcode resembling a crossword puzzle, which is called a "tag". Tags can be quickly captured with a mobile phone's camera and decoded to obtain a Web site address. This address can then be accessed via the phone's web browser.

**********************************************************************************************************************

18)Epistemology
Epistemology, or theory of knowledge, is the branch of philosophy that studies the nature and scope of knowledge. The term epistemology is based on the Greek words episteme (meaning knowledge) and logos (meaning account/explanation); it is thought to have been coined by the Scottish philosopher James Frederick Ferrier. Much of the debate in this field has focused on analyzing the nature of knowledge and how it relates to similar notions such as truth, belief, and justification. It also deals with the means of production of knowledge, and skepticism about different knowledge claims. In other words, epistemology addresses the questions "What is knowledge?", "How is knowledge acquired?", and "What do people know?"
Although approaches to answering any one of these questions frequently involve theories connected to the others (i.e. some theories of what knowledge is are influenced by broad views as to what people know, with restrictive definitions of knowledge thereby dismissed), there is enough particular to each that they may be treated separately. There are many different topics, stances, and arguments in the field of epistemology. Recent studies have dramatically challenged centuries-old assumptions, and the field therefore continues to be vibrant and dynamic.

************************************************************************************************************

19)Boids
Boids, developed by Craig Reynolds in 1986, is an artificial life program simulating the flocking behaviour of birds. As with most artificial life simulations, Boids is an example of emergent behaviour; that is, the complexity of Boids arises from the interaction of individual agents (the boids, in this case) adhering to a set of simple rules. The rules applied in the simplest Boids world are as follows:
* separation: steer to avoid crowding local flockmates
* alignment: steer towards the average heading of local flockmates
* cohesion: steer to move toward the average position of local flockmates
More complex rules can be added, such as obstacle avoidance and goal seeking. The movement of Boids can be characterized as either chaotic (splitting groups and wild behaviour) or orderly. Unexpected behaviours, such as flocks splitting and reuniting after avoiding obstacles, can be considered emergent. The boids framework is often used in computer graphics, providing realistic-looking representations of flocks of birds and other creatures, such as schools of fish or herds of animals. Boids work in a manner similar to cellular automata, since each boid acts autonomously and references a neighbourhood, as do cellular automata.
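The three rules above can be sketched directly in code; the neighbourhood radius and rule weights below are arbitrary illustrative choices, not Reynolds' original parameters.

```python
# Minimal 2-D Boids step implementing separation, alignment and cohesion.
import math

class Boid:
    def __init__(self, x, y, vx, vy):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy

def step(boids, radius=10.0, w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """Apply the three flocking rules, then move every boid."""
    updates = []
    for b in boids:
        nbrs = [o for o in boids if o is not b
                and math.hypot(o.x - b.x, o.y - b.y) < radius]
        if not nbrs:
            updates.append((b.vx, b.vy))
            continue
        n = len(nbrs)
        # cohesion: steer toward the average position of local flockmates
        cx = sum(o.x for o in nbrs) / n - b.x
        cy = sum(o.y for o in nbrs) / n - b.y
        # alignment: steer toward the average heading of local flockmates
        ax = sum(o.vx for o in nbrs) / n - b.vx
        ay = sum(o.vy for o in nbrs) / n - b.vy
        # separation: steer away from each nearby flockmate
        sx = sum(b.x - o.x for o in nbrs)
        sy = sum(b.y - o.y for o in nbrs)
        updates.append((b.vx + w_coh * cx + w_ali * ax + w_sep * sx,
                        b.vy + w_coh * cy + w_ali * ay + w_sep * sy))
    # apply all updates at once so every boid sees the same snapshot
    for b, (vx, vy) in zip(boids, updates):
        b.vx, b.vy = vx, vy
        b.x += vx
        b.y += vy
```

Iterating `step` over a randomly initialised population produces the emergent flocking described: cohesion and alignment pull the group together while separation keeps individuals from colliding.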
***************************************************************************************************************

19)Sun Spot
Sun SPOT (Sun Small Programmable Object Technology) is a wireless sensor network (WSN) mote developed by Sun Microsystems. The device is built upon the IEEE 802.15.4 standard. Unlike other available mote systems, the Sun SPOT is built on the Java 2 Micro Edition Virtual Machine (JVM).
Hardware
The completely assembled device should be able to fit in the palm of your hand.
Processing:
* 180 MHz 32-bit ARM920T core, 512K RAM, 4M Flash
* 2.4 GHz IEEE 802.15.4 radio with integrated antenna
* USB interface
Sensor board:
* 2G/6G 3-axis accelerometer
* Temperature sensor
* Light sensor
* 8 tri-color LEDs
* 6 analog inputs
* 2 momentary switches
* 5 general-purpose I/O pins and 4 high-current output pins
Networking
The motes communicate using the IEEE 802.15.4 standard, including the base-station approach to sensor networking. This implementation of 802.15.4 is not ZigBee-compliant.
Software
The device's use of Java device drivers is particularly remarkable, as Java is known for its hardware independence. Sun SPOT uses a small J2ME virtual machine which runs directly on the processor without an OS.

******************************************************************************************************************************

20)WarDriving
Wardriving is searching for Wi-Fi wireless networks from a moving vehicle. It involves using a car or truck and a Wi-Fi-equipped computer, such as a laptop or a PDA, to detect the networks. It was also known (as of 2002) as WiLDing (Wireless Lan Driving), although this term never gained any popularity and is no longer used; the practice originated in the San Francisco Bay Area with the Bay Area Wireless Users Group (BAWUG). It is similar to using a scanner for radio. Many wardrivers use GPS devices to record the location of each network they find and log it on a website (the most popular being WiGLE).
For better range, antennas are built or bought, varying from omnidirectional to highly directional. Software for wardriving is freely available on the Internet, notably NetStumbler for Windows, Kismet for Linux, and KisMAC for Macintosh. Wardriving was named after wardialing (popularized in the Matthew Broderick movie WarGames), in which software uses a phone modem to dial numbers sequentially and see which ones are connected to a fax machine, computer, or similar device.

***********************************************************************************************************************

21)Honey pot
A honeypot is an information system resource whose value lies in unauthorized or illicit use of that resource. A honeypot is a closely monitored network decoy serving several purposes: it can distract adversaries from more valuable machines on a network, provide early warning about new attack and exploitation trends, or allow in-depth examination of adversaries during and after exploitation of the honeypot. Deploying a physical honeypot is often time-intensive and expensive, as different operating systems require specialized hardware and every honeypot requires its own physical system. Honeypots are a powerful new technology with incredible potential. They can do everything from detecting new attacks never seen in the wild before, to tracking automated credit card fraud and identity theft. In the past several years the technology has been developing rapidly, with new concepts such as honeypot farms, commercial and open source solutions, and documented findings released. A great deal of research has been focused on identifying, capturing, and researching external threats. While malicious and dangerous, these attacks are often random, with attackers more interested in how many systems they can break into than in which systems they break into.
To date, limited research has been done on how honeypots can apply to a far more dangerous and devastating threat: the advanced insider. This trusted individual knows the networks and the organization. Often, these individuals are not after computers, but specific information. This is a risk that has proven far more dangerous, and far more difficult to mitigate.

***************************************************************************************************************

22)Swarm Intelligence
Swarm Intelligence (SI) is an artificial intelligence technique based around the study of collective behavior in decentralized, self-organized systems. The expression "swarm intelligence" was introduced by Beni and Wang in 1989, in the context of cellular robotic systems. SI systems are typically made up of a population of simple agents interacting locally with one another and with their environment. Although there is normally no centralized control structure dictating how individual agents should behave, local interactions between such agents often lead to the emergence of global behavior. Examples of such systems can be found in nature, including ant colonies, bird flocking, animal herding, bacterial growth and fish schooling. The application of swarm principles to large numbers of robots is called swarm robotics.

***************************************************************************************************************

23)QR Code
A QR Code is a matrix code (or two-dimensional bar code) created by the Japanese corporation Denso-Wave in 1994. The "QR" is derived from "Quick Response", as the creator intended the code to allow its contents to be decoded at high speed. QR Codes are most common in Japan, and are currently the most popular type of two-dimensional code there. Although initially used for tracking parts in vehicle manufacturing, QR Codes are now used for inventory management in a wide variety of industries.
More recently, the inclusion of QR Code reading software on camera phones in Japan has led to a wide variety of new, consumer-oriented applications aimed at relieving the user of the tedious task of entering data into their mobile phone. QR Codes storing addresses and URLs are becoming increasingly common in magazines and advertisements in Japan. The addition of QR Codes on business cards is also becoming common, greatly simplifying the task of entering the personal details of a new acquaintance into the address book of one's mobile phone.

***************************************************************************************************************

24)Mesotechnology
Mesotechnology describes a budding research field which could replace nanotechnology in the future as the primary means to control matter at length scales ranging from a cluster of atoms to microscopic elements. The prefix meso- comes from the Greek word mesos, meaning middle; hence the technology spans a range of length scales, as opposed to nanotechnology, which is concerned only with the smallest atomic scales. Although the term itself is still quite new, the general concept is not. Many fields of science have traditionally focused either on single discrete elements or on large statistical collections, where many theories have been successfully applied. In the field of physics, for example, Quantum Mechanics describes very well phenomena on the atomic to nanoscale, while classical Newtonian Mechanics describes the behavior of objects on the microscale and up. However, the length scale in the middle (the mesoscale) is not well described by either theory. Similarly, psychologists focus heavily on the behavior and mental processes of the individual, while sociologists study the behavior of large societal groups; but what happens when only three people are interacting? This is the mesoscale.
****************************************************************************************************************************

25)Bio-inspired computing
Bio-inspired computing is a field of study that loosely knits together subfields related to the topics of connectionism, social behaviour and emergence. It is often closely related to the field of artificial intelligence, as many of its pursuits can be linked to machine learning. It relies heavily on the fields of biology, computer science and mathematics. Briefly put, it is the use of computers to model nature, and simultaneously the study of nature to improve the usage of computers. Biologically inspired computing is a major subset of natural computation. One way in which bio-inspired computing differs from artificial intelligence (AI) is in how it takes a more evolutionary approach to learning, as opposed to what could be described as the creationist methods used in traditional AI. In traditional AI, intelligence is often programmed from above: the programmer is the creator, who makes something and imbues it with intelligence. Bio-inspired computing, on the other hand, takes a more bottom-up, decentralised approach; bio-inspired techniques often involve specifying a set of simple rules, a set of simple organisms which adhere to those rules, and a method of iteratively applying those rules. After several generations of rule application it is usually the case that some forms of complex behaviour arise.

***************************************************************************************************************************

26)Anomaly Detection
Network intrusion detection systems often rely on matching patterns that are gleaned from known attacks. While this method is reliable and rarely produces false alarms, it has the obvious disadvantage that it cannot detect novel attacks.
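A minimal illustration of the anomaly-detection idea: learn which attribute values occur in normal traffic, then flag events whose attributes take never-seen values. This toy model is far simpler than a conditional-rule learner, and the attribute names are invented for illustration.

```python
# Toy anomaly detector: flag attribute values never seen in training.
# Attribute names and values below are made-up examples.

def train(events):
    """events: list of dicts (attribute -> value) from normal traffic."""
    seen = {}
    for e in events:
        for attr, val in e.items():
            seen.setdefault(attr, set()).add(val)
    return seen

def score(model, event):
    """Count attributes whose values never appeared in training."""
    return sum(1 for a, v in event.items() if v not in model.get(a, set()))

normal = [{"dport": 80, "flags": "SYN"}, {"dport": 443, "flags": "SYN"}]
model = train(normal)
print(score(model, {"dport": 31337, "flags": "URG"}))  # 2
```

In practice a threshold on the score (or, as in the approach described next, learned rules with violation probabilities) decides whether an event is reported, trading detection rate against false alarms.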
An alternative approach is to learn a model of normal traffic and report deviations, but these anomaly models are typically restricted to modeling IP addresses and ports, and do not include the application payload where many attacks occur. We describe a novel approach to anomaly detection. We extract a set of attributes from each event (IP packet or TCP connection), including strings in the payload, and induce a set of conditional rules which have a very low probability of being violated in a nonstationary model of the normal network traffic in the training data. In the 1999 DARPA intrusion detection evaluation data set, we detect about 60% of 190 attacks at a false alarm rate of 10 per day (100 total). We believe that anomaly detection can work because most attacks exploit software or configuration errors that escaped field testing, and so are only exposed under unusual conditions. Though our rule learning techniques are applied to network intrusion detection, they are general enough for detecting anomalies in other applications.

**************************************************************************************************************

27)Automated Authentication of Identity Documents
Identity documents (IDs), such as passports and drivers' licenses, are relied upon to deter fraud and stop terrorism. A multitude of document types and increased expertise in forgery make human inspection of such documents inconsistent and error-prone. New-generation reader/authenticator technology can assist in the ID screening process. Such devices can read the information on the ID, authenticate it, and provide an overall security risk analysis. This talk will discuss how image processing and pattern recognition technology were used in the implementation of one such commercial device, the AssureTec i-Dentify reader.
The reader is based on a high-resolution color CCD camera which automatically captures a presented ID under a variety of light sources (visible, UV, IR, and others) in a few seconds. Automated processing of IDs involves a number of interesting technical challenges which will be discussed: sensing the presence of a document in the reader viewing area; cropping the document and extracting its size; identifying the document type by rapid comparison to a known document library; locating, extracting, and image processing of data fields of various types (text, photo, symbols, barcodes); processing text fields with appropriate OCR engines; cross-checking data from different parts of a document for consistency; checking for the presence of security features (e.g., UV patterns); and providing an overall risk assessment that the document is falsified. A live demonstration of the AssureTec i-Dentify reader will be given.

**********************************************************************************************************************

28)Brain Computer Interface
The brain-computer interface is a staple of science fiction writing. In its earliest incarnations no mechanism was thought necessary, as the technology seemed so far-fetched that no explanation was likely. As more became known about the brain, however, the possibility has become more real and the science fiction more technically sophisticated. Recently, the cyberpunk movement has adopted the idea of 'jacking in', sliding 'biosoft' chips into slots implanted in the skull (Gibson, W. 1984). Although such biosofts are still science fiction, there have been several recent steps toward interfacing the brain and computers. Chief among these are techniques for stimulating and recording from areas of the brain with permanently implanted electrodes, and using conscious control of EEG to control computers.
Some preliminary work is being done on synapsing neurons on silicon transistors and on growing neurons into neural networks on top of computer chips. The most advanced work in designing a brain-computer interface has stemmed from the evolution of traditional electrodes. There are essentially two main problems: stimulating the brain (input) and recording from the brain (output). Traditionally, both input and output were handled by electrodes pulled from metal wires and glass tubing. Using conventional electrodes, multi-unit recordings can be constructed from multibarrelled pipettes. In addition to being fragile and bulky, the electrodes in these arrays are often too far apart, as most fine neural processes are only 0.1 to 2 µm apart. Pickard describes a new type of electrode which circumvents many of the problems listed above. These printed circuit micro-electrodes (PCMs) are manufactured in the same manner as computer chips. A design of a chip is photoreduced to produce an image on a photosensitive glass plate. This is used as a mask, which covers a UV-sensitive glass or plastic film. A PCM has three essential elements: 1) the tissue terminals, 2) a circuit board controlling or reading from the terminals, and 3) an input/output controller-interpreter, such as a computer.

***************************************************************************************************************************

29)Artificial passenger
An artificial passenger (AP) is a device that would be used in a motor vehicle to make sure that the driver stays awake. IBM has developed a prototype that holds a conversation with a driver, telling jokes and asking questions intended to determine whether the driver can respond alertly enough.
Assuming the IBM approach, an artificial passenger would use a microphone for the driver and a speech generator and the vehicle's audio speakers to converse with the driver. The conversation would be based on a personalized profile of the driver. A camera could be used to evaluate the driver's facial state, and a voice analyzer to evaluate whether the driver was becoming drowsy. If a driver seemed to display too much fatigue, the artificial passenger might be programmed to open all the windows, sound a buzzer, increase background music volume, or even spray the driver with ice water.

*************************************************************************************************************

30)ShotCode
ShotCode is a circular barcode created by OP3. It uses a dartboard-like circle, with a bull's-eye in the centre and data circles surrounding it. The technology reads data bits from these data circles by measuring the angle and distance from the bull's-eye of each. ShotCodes are designed to be read with a regular camera (including those found on mobile phones and webcams) without the need to purchase other specialised hardware. Because of the circular design, it is also possible for software to detect the angle from which the barcode is read. ShotCodes differ from matrix barcodes in that they do not store regular data; rather, they store an encoded URL which the reading device can connect to in order to download said data.

**********************************************************************************************************************
