Wanna be an IT Hero in your office? Come on, pick up your question...

Started by dhilipkumar, Sep 13, 2008, 11:53 AM

Previous topic - Next topic

dhilipkumar

What is a green computer?

             "The computer must be designed to use only non-toxic materials, to be energy efficient, and to have minimal impact on the environment in every stage of its life cycle." Elaborating on the last point, she adds: "A recycling strategy should be considered part of the computer's design, e.g., the computer should be designed for easy disassembly and disposal or made so that it is safe to discard."

The next environmental trend is "green IT". Computers may not produce plumes of smoke but they do consume surprisingly large amounts of electricity.

The computer industry is responsible for almost 2% of global carbon emissions, according to technology research group Gartner, mainly from PCs, servers and cooling systems.

Google has more than 500,000 servers in 40 data centres. It has never revealed how much greenhouse gas it generates, but the global Internet company says it now offsets this through a mixture of renewable energy, increased energy efficiency and other projects.

             Google is not alone in switching to green IT. SA companies are close behind in finding more energy-efficient ways of minimising the effect of technology usage on the environment.

dhilipkumar

What is Blu-ray?


              Blu-ray is an optical disc format designed to display high definition video and store large amounts of data.

Blu-ray is the successor to DVD. The standard was developed collaboratively by Hitachi, LG, Matsushita (Panasonic), Pioneer, Philips, Samsung, Sharp, Sony, and Thomson. It became the default optical disc standard for HD content and optical data storage after winning a format war with HD DVD, the format promoted by Toshiba and NEC.

The format's name comes from the fact that a blue laser reads from and writes to the disc rather than the red laser of DVD players. The blue laser has a 405 nanometer (nm) wavelength that can focus more tightly than the red lasers used for writable DVD. As a consequence, a Blu-ray disc can store much more data in the same 12 centimeter space. Like the rewritable DVD formats, Blu-ray uses phase change technology to enable repeated writing to the disc.
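As a rough back-of-envelope check on that claim: areal data density scales roughly as (NA/wavelength) squared, where NA is the numerical aperture of the pickup lens. The lens figures below (NA 0.60 for DVD, NA 0.85 for Blu-ray) come from the published specs and are assumptions of this sketch, not from the text above:

```python
# Rough sanity check on why a tighter-focusing laser stores more data:
# areal density scales roughly as (NA / wavelength)^2.
DVD_WAVELENGTH_NM, DVD_NA = 650, 0.60        # assumed from published DVD specs
BLURAY_WAVELENGTH_NM, BLURAY_NA = 405, 0.85  # assumed from published Blu-ray specs

density_gain = (BLURAY_NA / BLURAY_WAVELENGTH_NM) ** 2 / (DVD_NA / DVD_WAVELENGTH_NM) ** 2
# Scale a 4.7 GB single-layer DVD by the density gain.
estimated_capacity_gb = 4.7 * density_gain
```

The estimate works out to roughly a 5x density gain, or about 24 GB per layer, which is in the same ballpark as the 25 to 27 GB per layer the format actually delivers.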

Blu-ray's standard storage capacity is enough to store a continuous backup copy of most people's hard drives on a single disc. Initially, the format had a 27 gigabyte (GB) single-sided capacity and 50 GB on dual-layer discs. Single-sided Blu-ray discs can store up to 13 hours of standard video data, compared to single-sided DVD's 133 minutes. In July 2008, Pioneer announced that they had found a way to increase capacity to 500 GB by creating 20-layer discs. These discs are not, however, expected to be commercially available in the near future.

                                               


dhilipkumar

   What is RAID?

RAID (redundant array of independent disks; originally redundant array of inexpensive disks) is a way of storing the same data in different places (thus, redundantly) on multiple hard disks. By placing data on multiple disks, I/O (input/output) operations can overlap in a balanced way, improving performance. Since using multiple disks increases the chance that one of them will fail, storing the data redundantly is what gives the array its fault tolerance.

A RAID appears to the operating system to be a single logical hard disk. RAID employs the technique of disk striping, which involves partitioning each drive's storage space into units ranging from a sector (512 bytes) up to several megabytes. The stripes of all the disks are interleaved and addressed in order.

In a single-user system where large records, such as medical or other scientific images, are stored, the stripes are typically set up to be small (perhaps 512 bytes) so that a single record spans all disks and can be accessed quickly by reading all disks at the same time.

In a multi-user system, better performance requires establishing a stripe wide enough to hold the typical or maximum size record. This allows overlapped disk I/O across drives.

There are at least nine types of RAID plus a non-redundant array (RAID-0):

RAID-0: This technique has striping but no redundancy of data. It offers the best performance but no fault-tolerance.

RAID-1: This type is also known as disk mirroring and consists of at least two drives that duplicate the storage of data. There is no striping. Read performance is improved, since either disk can service a read request. Write performance is the same as for single-disk storage. RAID-1 provides the best performance and the best fault tolerance in a multi-user system.

RAID-2: This type uses striping across disks with some disks storing error checking and correcting (ECC) information. It has no advantage over RAID-3.

RAID-3: This type uses striping and dedicates one drive to storing parity information. The embedded error checking (ECC) information is used to detect errors. Data recovery is accomplished by calculating the exclusive OR (XOR) of the information recorded on the other drives. Since an I/O operation addresses all drives at the same time, RAID-3 cannot overlap I/O. For this reason, RAID-3 is best for single-user systems with long record applications.

RAID-4: This type uses large stripes, which means you can read records from any single drive. This allows you to take advantage of overlapped I/O for read operations. Since all write operations have to update the parity drive, no I/O overlapping is possible. RAID-4 offers no advantage over RAID-5.

RAID-5: This type includes a rotating parity array, thus addressing the write limitation in RAID-4. Thus, all read and write operations can be overlapped. RAID-5 stores parity information but not redundant data (but parity information can be used to reconstruct data). RAID-5 requires at least three and usually five disks for the array. It's best for multi-user systems in which performance is not critical or which do few write operations.

RAID-6: This type is similar to RAID-5 but includes a second parity scheme that is distributed across different drives and thus offers extremely high fault- and drive-failure tolerance.

RAID-7: This type includes a real-time embedded operating system as a controller, caching via a high-speed bus, and other characteristics of a stand-alone computer. One vendor offers this system.

RAID-10: Combining RAID-0 and RAID-1 is often referred to as RAID-10, which offers higher performance than RAID-1 but at much higher cost. There are two subtypes: In RAID-0+1, data is organized as stripes across multiple disks, and then the striped disk sets are mirrored. In RAID-1+0, the data is mirrored and the mirrors are striped.
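The XOR parity mechanism behind RAID-3, -4 and -5 can be sketched in a few lines of Python. This illustrates the principle only; it is not a real RAID implementation:

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks, as used for RAID parity."""
    return bytes(reduce(lambda a, b: a ^ b, byte_tuple) for byte_tuple in zip(*blocks))

# Three data stripes plus one parity stripe (RAID-3/4/5 style).
stripes = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(stripes)

# Simulate losing stripe 1: XORing the surviving stripes with the
# parity stripe reconstructs the lost data, exactly as described above.
recovered = xor_blocks([stripes[0], stripes[2], parity])
assert recovered == stripes[1]
```

The same property is why a RAID-5 array survives any single drive failure: whichever stripe is lost, it is always the XOR of everything that remains.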

dhilipkumar

What is VHDL?

VHDL (VHSIC Hardware Description Language) is a language designed to describe the behavior of digital systems such as field-programmable gate arrays and application-specific integrated circuits in electronic design. VHDL describes the behavior of electronic components ranging from microprocessors and custom chips to simple logic gates. VHDL is used to describe precise aspects of electrical circuit behavior in order to create a VHDL simulation model. Combined with schematics, block diagrams and system-level VHDL descriptions, the VHDL simulation model can be used as the foundation for building larger circuits.

Additionally, VHDL functions as a general-purpose programming language, with a structure similar to that of C and C++. It differs in that it includes features for describing concurrent events and provides a solid set of control and data-representation facilities. VHDL is commonly used to capture the performance specification of a circuit in the form of a test bench: stimuli applied to the circuit, together with the expected outputs, verify the circuit's functionality over a period of time.
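The test-bench idea can be illustrated in Python rather than actual VHDL. A hypothetical device under test (here, a 2-input XOR gate) is driven with stimulus vectors and its outputs are compared against what the specification expects:

```python
# Illustrative test bench sketched in Python, not real VHDL:
# apply stimuli to a device under test and compare against expected outputs.

def dut_xor(a, b):
    """Hypothetical device under test: a 2-input XOR gate."""
    return a ^ b

# Stimulus vectors paired with the outputs the specification expects.
test_vectors = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for (a, b), expected in test_vectors:
    actual = dut_xor(a, b)
    assert actual == expected, f"mismatch for inputs {(a, b)}"
```

A real VHDL test bench plays the same role: it instantiates the entity, drives its ports through a sequence of stimuli over simulated time, and flags any output that deviates from the expected waveform.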

           An information technology professional interested in learning more about VHDL can participate in the informative tutorials provided in this section.

dhilipkumar

what is AJAX:

"Asynchronous JavaScript And XML", otherwise known as AJAX, is a Web development technique (not a programming language in itself) that allows a web page to receive small amounts of data from a web server without reloading the complete page. Ajax is built around JavaScript. AJAX permits Web pages to be dynamic and interactive and to behave like local applications; this combination of features is known as a "rich client" application. AJAX is related to Dynamic HTML and allows synchronous and asynchronous access to remote services through the XMLHttpRequest object.

Before AJAX, a user request for data would cause the entire Web page to refresh, resulting in slow loading times and minimal interaction between the user and the page. With the advent and success of AJAX, websites use this "remote scripting" to improve the responsiveness of interactive, dynamic pages: the browser exchanges small amounts of data with the server in the background, increasing the speed, usability and navigability of web pages.

It is necessary for any IT professional, programmer or developer to be familiar with the Ajax technique. Online training explaining the uses of Ajax, critiques of Ajax, its challenges, frameworks, a basic overview of Web Services, XML, security aspects and the power of Ajax is provided through tutorials in this section.

dhilipkumar

What is N-Tier?

N-Tier applications are useful in that they readily implement distributed application design and architecture concepts. They also provide strategic benefits to enterprise-level solutions. Two-tier, client-server applications may seem deceptively simple at the outset: they are easy to implement and handy for rapid prototyping. At the same time, these applications can be quite a pain to maintain and secure over time.

N-Tier applications, on the other hand, are advantageous, particularly in the business environment, for a number of reasons.

N-Tier applications typically come loaded with the following components:

Security. N-Tier applications come with logging mechanisms, monitoring facilities and appropriate authentication, helping to keep the system secure.

Availability and scalability. N-Tier applications tend to be more reliable. They come with failover mechanisms, such as failover clusters, to provide redundancy.

Manageability. N-Tier applications are designed with deployment, monitoring and troubleshooting in mind. They ensure that one has sufficient tools at one's disposal to handle any errors that may occur, log them, and provide guidance towards correcting them.

Maintenance. Maintenance in N-Tier applications is easy, as the applications adopt coding and deployment standards, as well as data abstraction, modular application design, and frameworks that enable reliable maintenance strategies.

Data abstraction. N-Tier applications make it easy to adjust functionality in one tier without altering the others.

dhilipkumar

Data Modeling

Information systems and computer science use data modeling to manage and organize large quantities of structured and unstructured data. A data model describes the information to be stored in vast data management systems such as relational databases. Data models do not cover unstructured data such as email messages, word processing documents, digital video, audio or image files. Furthermore, data modeling establishes implicit and explicit constraints and limitations on the structured data. The formal study of data models is known as data model theory.

A professional Information Technologies engineer may work as a Data Modeling Analyst for businesses using enterprise data management tools and technologies. A Data Modeling Analyst or Data Model Manager will be familiar with process modeling, understanding data modeling concepts, data modeling tools, entity relationship diagramming, dimensional data modeling and physical or logical data modeling.

Data Modeling Analysts use data modeling functions to supply an accurate representation of the enterprise. Secondly, data modeling is used to accurately reflect the data of the organization. Based on this information, a database is created.

Tutorials on Data Management in Data Modeling explain the basic concepts behind data modeling, its uses with enterprise management, terminology related to data modeling, the history of data modeling and instructions on various models within data modeling.


dhilipkumar


What is the 3x Protocol?

3x is the second revision of the Evolution-Data Optimized (EV-DO) set of mobile telecommunications standards made by the Third Generation Partnership Project 2 (3GPP2). It is also known as TIA-856 Rev B or EV-DO Rev B, the successor to EV-DO Rev 0 and EV-DO Rev A.

It is one of several types of the CDMA2000 set of standards, its full designation being CDMA2000 3xRTT, where "3xRTT" stands for "three times Radio Transmission Technology" (three times because it has three carriers, in comparison to 1xRTT which has only one carrier).

The 3x protocol, like all other EV-DO standards, is a form of 3G technology. In order to maximize its rate of data transmission, 3x makes use of duplexing and multiplexing techniques such as frequency division duplex (FDD) and code division multiple access (CDMA). To achieve higher rates of data transmission, 3x may make use of two 3.75 MHz channels, or multiple 1.25 MHz channels.

Since 3x is a multi-carrier version of Rev A, it is also sometimes referred to as "Multi-Carrier" or "MC." Its multi-carrier specification gives it a number of improvements over its predecessor. It has an increased maximum downlink rate of 14.7 Mbps (4.9 Mbps per carrier for 3 carriers is expected for typical transmission).

Expected peak rates for a 3x network with two channels are 6.2 Mbps for forward link throughput and 3.6 Mbps for reverse link throughput.

For one with three channels, expected peak rates are 9.3 Mbps for forward link throughput and 5.4 Mbps for reverse link throughput.
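The peak figures quoted above imply a per-carrier contribution of roughly 3.1 Mbps forward and 1.8 Mbps reverse, so the rates scale linearly with the number of carriers. A quick arithmetic check (working in tenths of a Mbps to sidestep floating-point drift):

```python
# Per-carrier rates implied by the quoted peaks:
# forward 6.2/2 = 9.3/3 = 3.1 Mbps; reverse 3.6/2 = 5.4/3 = 1.8 Mbps.
PER_CARRIER_FORWARD_TENTHS = 31  # 3.1 Mbps expressed in tenths
PER_CARRIER_REVERSE_TENTHS = 18  # 1.8 Mbps expressed in tenths

def peak_rates(carriers):
    """Return (forward, reverse) peak throughput in Mbps for N carriers."""
    return (carriers * PER_CARRIER_FORWARD_TENTHS / 10,
            carriers * PER_CARRIER_REVERSE_TENTHS / 10)

# Reproduces the two-channel and three-channel figures quoted above.
assert peak_rates(2) == (6.2, 3.6)
assert peak_rates(3) == (9.3, 5.4)
```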

Interference from adjacent sectors was reduced and rate was improved by hybrid frequency re-use.

Latency was reduced by using statistical multiplexing across channels, enabling new services like video telephony, web browsing, gaming and remote console sessions.

Services such as high-definition video streaming were also enabled by bundling multiple channels together.

Talk time and standby time are also increased. 3x also provided ample support for services that required different data rates for uploading and downloading, such as delivery of broadband multimedia content, transferring of files, and web browsing.

dhilipkumar

WHAT IS COM+:


COM+ is an extension of Component Object Model (COM), Microsoft's strategic building-block approach for developing application programs. COM+ is both an object-oriented programming architecture and a set of operating system services. It adds to COM a new set of system services for application components while they are running, such as notifying them of significant events or ensuring they are authorized to run. COM+ is intended to provide a model that makes it relatively easy to create business applications that work well with the Microsoft Transaction Server (MTS) on Windows NT or subsequent systems. It is viewed as Microsoft's answer to the Sun Microsystems-IBM-Oracle approach known as Enterprise JavaBeans (EJB).


Among the services provided by COM+ are:

#     An event registry that allows components to publish the possibility of an event and other components to subscribe to be notified when the event takes place. For example, when a sales transaction is completed, it could trigger an event that would allow other programs to be notified for subsequent processing.
#     The interception of designated system requests for the purpose of ensuring security
#     The queueing of asynchronously received requests for a service
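The event registry can be sketched as a minimal publish/subscribe system. This illustrates the pattern only; it is not the actual COM+ event-service API, and all names here are hypothetical:

```python
class EventRegistry:
    """Minimal publish/subscribe registry, loosely modeled on the
    COM+ event service described above (not the real COM+ API)."""

    def __init__(self):
        self._subscribers = {}

    def publish_event(self, event_name):
        # A publisher registers the possibility of an event.
        self._subscribers.setdefault(event_name, [])

    def subscribe(self, event_name, callback):
        # Other components subscribe to be notified when it fires.
        self._subscribers.setdefault(event_name, []).append(callback)

    def fire(self, event_name, payload):
        # Notify every subscriber, collecting their responses.
        return [cb(payload) for cb in self._subscribers.get(event_name, [])]

registry = EventRegistry()
registry.publish_event("sale_completed")
registry.subscribe("sale_completed", lambda sale: f"invoicing {sale}")
registry.subscribe("sale_completed", lambda sale: f"restocking {sale}")

# Completing a sale triggers the event; both subscribers are notified.
notifications = registry.fire("sale_completed", "order-42")
```

The key property, as in COM+, is that the publisher never needs to know who the subscribers are; the registry decouples them.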

How COM+ Works Briefly

A "component" is a building block program that is self-describing. This means that it can be run with a mix of other components and each will be able to understand the capabilities and characteristics of the other components. Practically, this means that a new application can be built by reusing components already known to exist and without having to compile the application. It also makes it relatively easy to distribute different components of an application among different computers in a network. Microsoft's Distributed Component Object Model (DCOM) adds interfaces to do this.

In addition to its self-description, a component consists of one or more classes that describe objects and the methods or actions that can be performed on an object. A class (or coclass in COM+ terminology) has properties described in an interface (or cointerface). The class and its interface are language-neutral.

Associated with the class are one or more methods and fields that are implemented in a specific language such as C++ or Java or a visual programming environment. When you instantiate a class, you create an object (something real that can be executed in the computer). Sometimes the term "class" is also used for the instantiated object (which can be confusing).

Using COM, objects (or classes) and their methods and associated data are compiled into binary executable modules that are, in fact, files with a dynamic link library (DLL) or EXE file-name suffix. A module can contain more than one class.


dhilipkumar

Save Time, Money and Manpower While Eliminating Redundancies from Your Data Backups


You already know about exponential data growth; in fact, you are probably experiencing it in your own environment. Digital information is doubling every 18 months. For some, that growth is even faster. Your information assets are invaluable to your company, so managing and protecting those assets is a critical priority. Data loss can impact revenue, your customers, and your reputation. In fact, some businesses can even fail due to a data loss or a prolonged period of downtime. That's why many companies are looking for solutions that protect data today and manage ever higher volumes in the future.

View this TechRepublic Webcast, co-sponsored by PC Connection and HP, featuring moderator James Hilliard and guest speaker David Fairfield, Business Continuity Product Line Manager for HP, to get expert tips and advice on defending your network against the internal security threats that are introduced every day by your own end users. These key topics and more are addressed:

-> Review customer business challenges (Explosive data growth, security, efficient utilization of resources)

-> Highlight the cost implications of experiencing lost data

-> Define what potential data protection strategy fits your customer needs

-> Provide a summary of the benefits of data deduplication and how it helps your customers with more efficient storage utilization and faster access to lost data
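The data-deduplication idea mentioned in the last point can be sketched as hash-based chunking: each unique chunk is stored once, keyed by its digest, and the backup itself becomes a list of digests. A minimal illustration, not any vendor's implementation:

```python
import hashlib

def deduplicate(chunks):
    """Store each unique chunk once, keyed by its SHA-256 digest;
    the backup is then just a manifest of digests."""
    store = {}
    manifest = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # physically write the chunk only once
        manifest.append(digest)
    return store, manifest

backup = [b"block-A", b"block-B", b"block-A", b"block-A"]
store, manifest = deduplicate(backup)

# Four logical blocks, but only two unique chunks are physically stored.
assert len(manifest) == 4 and len(store) == 2

# The original stream is fully recoverable from the manifest.
restored = [store[d] for d in manifest]
assert restored == backup
```

Redundant copies cost only a digest entry rather than a full block, which is where the storage savings come from.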

dhilipkumar

What is 4G


4G is the short term for fourth-generation wireless, the stage of broadband mobile communications that will supersede the third generation (3G). While neither standards bodies nor carriers have concretely defined or agreed upon what exactly 4G will be, it is expected that end-to-end IP and high-quality streaming video will be among 4G's distinguishing features. Fourth-generation networks are likely to use a combination of WiMAX and WiFi.
Technologies employed by 4G may include SDR (software-defined radio) receivers, OFDM (Orthogonal Frequency Division Multiplexing), OFDMA (Orthogonal Frequency Division Multiple Access), MIMO (multiple input/multiple output) technologies, UMTS and TD-SCDMA. All of these delivery methods are typified by high rates of data transmission and packet-switched transmission protocols. 3G technologies, by contrast, are a mix of packet- and circuit-switched networks.

When fully implemented, 4G is expected to enable pervasive computing, in which simultaneous connections to multiple high-speed networks provide seamless handoffs throughout a geographical area. Network operators may employ technologies such as cognitive radio and wireless mesh networks to ensure connectivity and efficiently distribute both network traffic and spectrum.

The high speeds offered by 4G will create new markets and opportunities for both traditional and startup telecommunications companies. 4G networks, when coupled with cellular phones equipped with higher quality digital cameras and even HD capabilities, will enable vlogs to go mobile, as has already occurred with text-based moblogs. New models for collaborative citizen journalism are likely to emerge as well in areas with 4G connectivity.

A Japanese company, NTT DoCoMo, is testing 4G communication at 100 Mbps for mobile users and up to 1 Gbps while stationary. NTT DoCoMo plans on releasing their first commercial network in 2010. Other telecommunications companies, however, are moving into the area even faster. In August of 2006, Sprint Nextel announced plans to develop and deploy a 4G broadband mobile network nationwide in the United States using WiMAX. The United Kingdom's chancellor of the exchequer announced a plan to auction 4G frequencies in fall of 2006.

4G technologies are sometimes referred to by the acronym "MAGIC," which stands for Mobile multimedia, Anytime/anywhere, Global mobility support, Integrated wireless and Customized personal service.

dhilipkumar

What is bluejacking?


                 Bluejacking is the practice of sending messages between mobile users using a Bluetooth wireless connection. People using Bluetooth-enabled mobile phones and PDAs can send messages, including pictures, to any other user within a 10-meter or so range. Because such communications don't involve the carrier, they are free of charge, which may contribute to their appeal.

                 Bluetooth was created to enable wireless communications between various devices; the user can set a device to be inaccessible, accessible to a specific device (through a process known as pairing), or discoverable, that is, accessible to all devices in range. Although Bluejacking is, in and of itself, a legitimate activity, it has enabled a number of less innocent practices, including bluesnarfing attacks, which involves theft of data from a mobile device, and reportedly a fad known as toothing, the practice of sending an invitation to nearby mobile device users for the purpose of setting up a quick sexual liaison.

dhilipkumar

What is GPS messaging?

GPS (Global Positioning System) messaging is a wireless messaging system for location-specific rather than recipient-specific messages. Somewhat like electronic sticky notes, the messages are sent and received by people with GPS locators in their wireless devices; each message is linked to the location of its sender and can be accessed by any equipped mobile user entering that location. GPS messaging is sometimes called mid-air messaging, because that is where the messages seem to be located. Hewlett-Packard has a prototype GPS messaging system running in its lab in Bristol, England.

GPS messaging is said to have enormous potential for both emergency situations and the less urgent, albeit ongoing, concerns of users. For example, tourists could leave messages outside a restaurant -- either praising it or complaining about it -- that other tourists would pick up when they were in that location, or highway workers could warn people about upcoming traffic hazards by leaving a message that drivers would receive when they approached a dangerous area. A project called the Nebraska GPS-Messaging and Satellite Voice Communication Demonstration and Field Test recently explored the use of GPS messaging for law enforcement, emergency response, and highway maintenance applications.

Here's how one version works: You upload a message (or perhaps an audio clip or an image); the message is tagged with your current geographic coordinates and stored on a Web page that is linked to those specific coordinates. Then, when anyone with a capable wireless device enters that location, they can access the message, either as text or image on a screen, or as an audio message through an earpiece. When you move from one location to another, the GPS receiver in your device checks the Web site for messages linked to your location and downloads any that are there. In effect, any GPS-resolvable space could have its own Web site that users would enter geographically.
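That flow can be sketched as a location-keyed message store. The coordinate-quantization scheme and every name below are hypothetical, purely to illustrate the idea:

```python
def grid_key(lat, lon, precision=3):
    """Quantize coordinates so nearby readings map to the same cell
    (three decimal places is roughly 100 m). Purely illustrative."""
    return (round(lat, precision), round(lon, precision))

messages = {}  # stands in for the per-location Web pages described above

def leave_message(lat, lon, text):
    # Tag the message with the sender's current coordinates.
    messages.setdefault(grid_key(lat, lon), []).append(text)

def check_location(lat, lon):
    # What a device would download on entering this location.
    return messages.get(grid_key(lat, lon), [])

leave_message(51.4546, -2.5881, "Great curry at the corner restaurant")

# A second user a few metres away lands in the same grid cell
# and picks the message up; anywhere else, nothing is found.
found = check_location(51.4549, -2.5883)
```

A real system would resolve the lookup through a Web site linked to the coordinates, as the paragraph above describes, but the location-as-key principle is the same.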


dhilipkumar

What is Push to Talk (PTT)?

Push to talk (PTT) is a means of instantaneous communication commonly employed in wireless cellular phone services that uses a button to switch a device from voice transmission mode to voice reception mode. The operation of phones used in this way is similar to "walkie talkie" use. PTT switches a phone from full duplex mode, where both parties can hear each other simultaneously, to half duplex mode, where only one party can speak at a time. Multiple parties may also be included in the conversation.

All major wireless carriers are rolling out versions of the service, which has been in wide use by Nextel (using the Integrated Digital Enhanced Network, or iDEN ) in the telecommunications and construction industries for years. These new versions of PTT, sometimes described as "Push To Talk over Cellular" (PoC), are based on 2.5G or 3G packet-switched networks using a form of VoIP based upon SIP and RTP protocols instead of iDEN. While current standards only allow users to talk to others within proprietary cell phone networks, future cooperation between companies and agreement on standards may allow interoperability between handsets on differing carriers.

Traditionally, a major attraction to consumers and businesses using PTT is the ability to communicate on-demand without using allotted minutes within a calling plan. This incentive may diminish as carriers adjust pricing structures to include PTT in data plans or in regular minute counts.

Early mobile telephony also used a form of PTT in the 1980s. Similar to operator-assisted landline telephone services of the early 20th century, mobile telephone users would press and hold a PTT button for several seconds to alert an operator. When the user released the button, an operator would then ask for the number the user wished to dial. The user would then transmit back and tell the operator the desired number, after which the operator would subsequently connect the wireless phone to the number desired.



dhilipkumar

What is VoD (video on demand)?


         Video on demand (VoD) is an interactive TV technology that allows subscribers to view programming in real time or download programs and view them later. A VoD system at the consumer level can consist of a standard TV receiver along with a set-top box. Alternatively, the service can be delivered over the Internet to home computers, portable computers, high-end cellular telephone sets and advanced digital media devices.

VoD has historically suffered from a lack of available network bandwidth, resulting in bottlenecks and long download times. VoD can work well over a wide geographic region or on a satellite-based network as long as the demand for programming is modest. However, when large numbers of consumers demand multiple programs on a continuous basis, the total amount of data involved (in terms of megabytes) can overwhelm network resources.

             One way to mitigate this problem is to store programs on geographically distributed servers and provide programs to local users on request, a technology called store and forward. This approach increases the availability of the programming and the overall reliability of the system compared with the use of a single gigantic repository. Store and forward also allows local providers to maintain their systems and set up billing structures independently. Asynchronous transfer mode (ATM) switching technology lends itself especially well to this application.

The VoD concept is not new. The first commercial VoD service was launched in Hong Kong in the early 1990s. In the United States, Oceanic Cable of Hawaii was the first to offer it beginning in 2000, immediately after the passing of the Y2K scare. Today, VoD is offered by numerous providers, particularly those who also offer triple play services. VoD is used in educational institutions and can enhance presentations in videoconference environments. VoD is also offered in most high-end hotels. VoD will likely become more common as fiber to the home (FTTH) services become widespread.

dhilipkumar


worm


In a computer, a worm is a self-replicating program that does not alter files but resides in active memory and duplicates itself. Worms use parts of an operating system that are automatic and usually invisible to the user. It is common for worms to be noticed only when their uncontrolled replication consumes system resources, slowing or halting other tasks.

This term is not to be confused with WORM (write once, read many).

In computer storage media, WORM (write once, read many) is a data storage technology that allows information to be written to a disc a single time and prevents the drive from erasing the data. The discs are intentionally not rewritable, because they are especially intended to store data that the user does not want to erase accidentally. Because of this feature, WORM devices have long been used for the archival purposes of organizations such as government agencies or large enterprises. A type of optical media, WORM devices were developed in the late 1970s and have been adapted to a number of different media. The discs have varied in size from 5.25 to 14 inches wide, in varying formats ranging from 140MB to more than 3 GB per side of the (usually) double-sided medium. Data is written to a WORM disc with a low-powered laser that makes permanent marks on the surface.

Because of a lack of standardization, WORM discs have typically been only readable by the drive on which they were written, and hardware and software incompatibility has hampered their marketplace acceptance. Other optical media, such as CDs and DVDs that can be recorded once and read an unlimited number of times are sometimes considered WORM devices, although there is some argument over whether formats that can be written in more than one session (such as the multisession CD) qualify as such. CD-R has gradually been replacing traditional WORM devices, and it is expected that some newer technology, such as DVD-R or HD-ROM will eventually replace both WORM and CD-R devices.

dhilipkumar

        

what is Datacard



A datacard is any removable computer component, approximately the size of a credit card, that contains data, or that contains nonvolatile memory to which data can be written and from which data can be recovered. The term is sometimes used as a synonym for smart card.

Some datacards are portable devices and include such components as flash memory modules and proprietary Memory Sticks. They are useful for the transfer of files among computers and for short-term data backup. Still other datacards are used with notebook computers to provide mobile wireless Internet access.

sajiv


Visual Basic:

Visual Basic is a third-generation, event-driven programming language. Microsoft released Visual Basic in 1991; it was the first visual development tool from Microsoft. Visual Basic was derived from BASIC and enables rapid application development of graphical user interface applications, access to databases using DAO, RDO or ADO, and creation of ActiveX controls and objects. The language allows programmers to create not only simple GUI applications but also quite complex ones, and to target Windows, the Web, and mobile devices.

Programming in Visual Basic is a combination of visually arranging components on a form and specifying the attributes and actions of those components. Because sensible default attributes and actions are defined for every component, a simple program can be built with very little hand-written code. Forms are created using drag-and-drop techniques: a tool palette is used to place controls such as text boxes and buttons on the form window. Each control receives default property values when it is created, which the programmer can then change.

Visual Basic is not only a programming language but also a complete graphical development environment. It is often used to build the front end to a database system: the user interface that collects input from the user and displays formatted output attractively. As the programmer works in the graphical environment, much of the program code is generated automatically. The main object in Visual Basic is called a form. Once you create a form, you can change its properties using the Properties window and then add events to your controls. Events are responses to actions performed on controls.

Using Visual Basic's tools, you can quickly translate an abstract idea into a program design that you can actually see on the screen. VB encourages you to experiment, revise, and rework your design until the project meets your requirements. Visual Basic programmers use the language in areas such as education, business, accounting, marketing, and sales. Visual Basic supports a number of common programming constructs and language elements; once you understand the basics of the language, you can create powerful applications.

Visual Basic can create executables (EXE files) and ActiveX controls, but it is primarily used to develop Windows applications and front ends to Web database systems. The current generation of Visual Basic continues this tradition of giving you a faster and easier way to create .NET Framework-based applications: it fully integrates the .NET Framework and the common language runtime, which provide language interoperability, garbage collection, enhanced security, and versioning support.


dhilipkumar

what is flexible organic light emitting device (FOLED)

In display technology, FOLED (flexible organic light emitting device) is an organic light emitting device (OLED) built on a flexible base material, such as clear plastic film or reflective metal foil, instead of the usual glass base. FOLED displays can be rolled up, folded, or worn as part of a wearable computer. The devices are said to be lighter, more durable, and less expensive to produce than the traditional glass-based alternatives.

FOLED's lightweight base materials significantly decrease the overall weight of a screen, which makes FOLED displays especially useful for portable devices such as laptop computers, and for other displays where weight is a consideration, such as large wall-mounted screens. Furthermore, a FOLED display is less prone to breakage than a glass-based display and is much less expensive to produce than the silicon-based LCD displays used for small displays and flat-screen monitors.

Time magazine named Universal Display Corporation (UDC)'s rollable FOLED wireless monitor prototype one of the 10 best environmentally friendly technologies of 2002. UDC is working on FOLED-based products such as rollable, refreshable electronic newspapers and video screens embedded in car windshields, walls, windows, and office partitions. According to UDC VP Janice Mahon, such products could reach the market within five years.

FOLEDs are among a number of organic LED variations in development. UDC is also developing TOLED (transparent OLED), PHOLED (phosphorescent OLED), and SOLED (stacked OLED) technologies.

dhilipkumar

        

what is batch file


A batch file is a text file that contains a sequence of commands for a computer operating system. It's called a batch file because it batches (bundles or packages) into a single file a set of commands that would otherwise have to be presented to the system interactively from a keyboard one at a time. A batch file is usually created for command sequences for which a user has a repeated need. Commonly needed batch files are often delivered as part of an operating system. You initiate the sequence of commands in the batch file by simply entering the name of the batch file on a command line.

In the Disk Operating System (DOS), a batch file has the file name extension ".BAT". (The best known DOS batch file is the AUTOEXEC.BAT file that initializes DOS when you start the system.) In Unix-based operating systems, a batch file is called a shell script. In IBM's mainframe VM operating systems, it's called an EXEC.
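Since a Unix shell script is the batch file's counterpart, a minimal sketch shows the idea: several commands, which could otherwise be typed interactively one at a time, bundled into a single file (the file and directory names here are hypothetical):

```shell
#!/bin/sh
# batch_demo.sh -- a shell-script "batch file" of three repeatable commands
mkdir -p /tmp/batch_demo                              # create a working directory
echo "backup started" > /tmp/batch_demo/status.txt    # write a status record
cp /tmp/batch_demo/status.txt /tmp/batch_demo/status.bak  # copy it as a backup
echo "batch complete"
```

You would run the whole sequence by entering just the script's name on the command line, exactly as you would run a .BAT file under DOS.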

dhilipkumar

    hoax
    host file hijack
    hybrid virus
    hybrid virus/worm
    identity theft
    iJacking
    ILOVEYOU virus
    IM worm



These are some well-known security threats and malware terms.

what is UTM

  Unified threat management (UTM) refers to a comprehensive security product that includes protection against multiple threats. A UTM product typically includes a firewall, antivirus software, content filtering and a spam filter in a single integrated package. The term was originally coined by IDC, a provider of market data, analytics and related services. UTM vendors include Fortinet, LokTek, Secure Computing Corporation and Symantec.

The principal advantages of UTM are simplicity, streamlined installation and use, and the ability to update all the security functions or programs concurrently. As the nature and diversity of Internet threats evolve and grow more complex, UTM products can be tailored to keep up with them all. This eliminates the need for systems administrators to maintain multiple security programs over time.

dhilipkumar

what is WBEM


Web-Based Enterprise Management

Web-Based Enterprise Management (WBEM) is a set of industry standards that an enterprise can use to manage its information operations in the distributed computing environment of the Internet. An important part of WBEM is the Common Information Model (CIM), a standard for defining device and application characteristics so that system and network administrators and management programs are able to control devices and applications from different manufacturers or sources in the same way. WBEM standards provide a Web-based approach for exchanging CIM data across different technologies and platforms. CIM data is encoded using Extensible Markup Language (XML) and usually transmitted between WBEM servers and clients using the Internet's Hypertext Transfer Protocol (HTTP).

WBEM is designed to be extensible, allowing new applications, devices, and operating systems to be specified in the future. Open-source implementations of WBEM are available from several projects, including OpenPegasus, OpenWBEM, and WBEMsource. WBEM is said to be particularly appropriate for storage networking, grid computing, utility computing, and Web services.

WBEM was developed by the Distributed Management Task Force (DMTF), a collaboration of BMC Software, Cisco, Compaq, IBM, Intel, Microsoft, and other companies. A more limited approach to a network management standard, the Simple Network Management Protocol (SNMP), was developed earlier and is still in use.

dhilipkumar

Cosmos


             Cosmos is an evolving, open source, .NET-based operating system development tool. The acronym stands for "C# Open Source Managed Operating System."

              With Cosmos, developers can create and debug their own custom operating systems using Visual Studio .NET, Microsoft's integrated development environment. Cosmos requires minimal programming experience and the full source code is publicly available. Cosmos shares some characteristics of Singularity, a Microsoft research project devoted to the development of new operating systems. Cosmos was conceived by a former employee of Microsoft, but the project itself is independent of Microsoft.

dhilipkumar

what is CICS.....?


                         CICS (Customer Information Control System) is an online transaction processing (OLTP) program from IBM that, together with the COBOL programming language, has formed over the past several decades the most common toolset for building customer transaction applications in large enterprise mainframe computing. A great number of the legacy applications still in use are COBOL/CICS applications. Using the application programming interface (API) provided by CICS, a programmer can write programs that communicate with online users and read from or write to customer and other records (orders, inventory figures, customer data, and so forth) in a database, usually referred to as "data sets," using CICS facilities rather than IBM's access methods directly. Like other transaction managers, CICS can ensure that transactions complete and, if not, undo partly completed transactions so that the integrity of the data records is maintained.

               IBM markets or supports a CICS product for OS/390, Unix, and Intel PC operating systems. Some of IBM's customers use IBM's Transaction Server to handle e-business transactions from Internet users and forward these to a mainframe server that accesses an existing CICS order and inventory database.



dhilipkumar

what is ABAP  (in software)


ABAP (Advanced Business Application Programming) is a programming language for developing applications for the SAP R/3 system, a widely installed business application subsystem. The latest version, ABAP Objects, is object-oriented. SAP will run applications written in ABAP/4, the earlier ABAP version, as well as applications using ABAP Objects.


SAP's original business model for R/3 was developed before the idea of an object-oriented model was widespread. The transition to the object-oriented model reflects an increased customer demand for it. ABAP Objects uses a single inheritance model and full support for object features such as encapsulation, polymorphism, and persistence.

Getting started with ABAP


To explore how ABAP is used in the enterprise, here are some additional resources:

ABAP for newbies: Here is a collection of resources that will help you get started with this programming language.

Which to learn first: XI, ABAP, or SD?: Confused about whether you should learn ABAP, XI, or SD? Here are some answers on the differences from a SearchSAP.com expert.

What SAP says about ABAP's future: Learn about how SAP sees ABAP's future.

Podcast: How can ABAP developers survive in a NetWeaver era?: Learn what ABAP programmers need to know to stay current in a NetWeaver world.

ABAP jobs and more: Special report: Learn how to get started in ABAP and keep your skills current.

dhilipkumar

what is .............Blue Cloud......?


        Blue Cloud is an approach to shared infrastructure developed by IBM. The goal of IBM's Blue Cloud is to provide services that automate fluctuating demands for IT resources. The set of all the connections involved is sometimes called the "cloud."
        The primary objective of the Blue Cloud project is to facilitate distributed computing within data centers, rather than performing tasks on individual machines or through remote servers. Blue Cloud makes use of virtualized Linux images and parallel workload scheduling. Blue Cloud also employs IBM's Tivoli software to provide demand-based performance by provisioning resources among end users and monitoring the condition of provisioned servers.

         The Blue Cloud concept arose as a result of IBM's research on an innovation portal called the Technology Adoption Program. Blue Cloud is supported by hundreds of engineers and developers worldwide and employs open source software, standards, technology and services. Blue Cloud offerings are expected to become publicly available in 2008 for systems with PowerPC and x86 processors.

dhilipkumar

what is .........compiler......?


                  A compiler is a special program that processes statements written in a particular programming language and turns them into machine language or "code" that a computer's processor uses. Typically, a programmer writes language statements in a language such as Pascal or C one line at a time using an editor. The file that is created contains what are called the source statements. The programmer then runs the appropriate language compiler, specifying the name of the file that contains the source statements.

                  When executing (running), the compiler first parses (or analyzes) all of the language statements syntactically one after the other and then, in one or more successive stages or "passes", builds the output code, making sure that statements that refer to other statements are referred to correctly in the final code. Traditionally, the output of the compilation has been called object code or sometimes an object module. (Note that the term "object" here is not related to object-oriented programming.) The object code is machine code that the processor can process or "execute" one instruction at a time.

                    More recently, the Java programming language, a language used in object-oriented programming, has introduced the possibility of compiling output (called bytecode) that can run on any computer system platform for which a Java virtual machine or bytecode interpreter is provided to convert the bytecode into instructions that can be executed by the actual hardware processor. Using this virtual machine, the bytecode can optionally be recompiled at the execution platform by a just-in-time compiler.
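Python illustrates the same compile-to-bytecode idea with nothing but its standard library: the built-in compile() function parses the source statements and generates a code object, whose bytecode the Python virtual machine then executes (a minimal sketch, analogous to the Java case):

```python
import dis

source = "x = 1 + 2"                             # the source statements
code_obj = compile(source, "<example>", "exec")  # parse and generate bytecode

namespace = {}
exec(code_obj, namespace)   # the virtual machine executes the bytecode
print(namespace["x"])       # 3

dis.dis(code_obj)           # disassemble: list the generated instructions
```

The disassembly shows the individual virtual-machine instructions that the compilation step produced, which is the role the object code plays for a hardware processor.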

                  Traditionally in some operating systems, an additional step was required after compilation - that of resolving the relative location of instructions and data when more than one object module was to be run at the same time and they cross-referred to each other's instruction sequences or data. This process was sometimes called linkage editing and the output known as a load module.

                     A compiler works with what are sometimes called 3GL and higher-level languages. An assembler works on programs written using a processor's assembler language.

dhilipkumar

what is ...........delimiter..........?



In computer programming, a delimiter is a character that identifies the beginning or the end of a character string (a contiguous sequence of characters). The delimiting character is not part of the character string. In command syntax, a space, a backslash (\), or a forward slash (/) is often a delimiter, depending on the rules of the command language. The program interpreting the character string knows what the delimiters are.

      Delimiters can also be used to separate the data items in a database (the columns in the database table) when transporting the database to another application. For example, in a comma-separated values file (CSV file), each value in a table row is separated from the next value by a comma, and the beginning of a new row is indicated by a new line character.
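A short Python sketch makes the idea concrete (the data values are made up for illustration): the comma delimits the fields but never appears in the parsed values.

```python
import csv
import io

# Comma-delimited table data: the comma separates values but is not part of them.
csv_text = "name,qty\nwidget,3\ngadget,5\n"
rows = list(csv.reader(io.StringIO(csv_text)))
print(rows)  # [['name', 'qty'], ['widget', '3'], ['gadget', '5']]

# The same idea at its simplest: splitting a string on a delimiting character.
path_parts = "usr/local/bin".split("/")
print(path_parts)  # ['usr', 'local', 'bin']
```

Note that the delimiters themselves vanish in the output, exactly as the definition above describes.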

dhilipkumar

What is   Blue Gene:


                   Blue Gene is an experimental parallel processing supercomputer developed by IBM that employs thousands of processors, each of which demands minimal electric current. Blue Gene dissipates relatively little energy as heat in proportion to its computational power. For this reason, it is sometimes called Frost.
The Blue Gene architecture uses two processors on a chip. For example, 1024 Blue Gene nodes (2048 processors) can be configured in a single rack, demanding approximately 25 kilowatts (kW) of electricity. The fact that a large number of processors can be put into a small space translates into scalability as well as compactness. In proportion to its physical size, Blue Gene exhibits unprecedented computing speed. At the IBM facility in Rochester, Minnesota, a Blue Gene system with 32,768 processors was ranked by the Top500 Supercomputer Sites as the fastest machine in the world.

Potential applications of Blue Gene include the simulation of complex processes and phenomena such as space flight, wildfire behavior, cloud formation, storm evolution, and the effects of human activity on the earth's climate.

dhilipkumar

Boilerplate

    In information technology, a boilerplate is a unit of writing that can be reused over and over without change. By extension, the idea is sometimes applied to reusable programming as in "boilerplate code." The term derives from steel manufacturing, where boilerplate is steel rolled into large plates for use in steam boilers. The implication is either that boilerplate writing has been time-tested and strong as "steel," or possibly that it has been rolled out into something strong enough for repeated reuse. Legal agreements, including software and hardware terms and conditions, make abundant use of boilerplates. The term is also used as an adjective as in "a boilerplate paragraph" and also as in "The entire document was boilerplate."
A boilerplate can be compared to a certain kind of template, which can be thought of as a fill-in-the-blanks boilerplate. Some typical boilerplates include: mission statements, safety warnings, commonly used installation procedures, copyright statements, and responsibility disclaimers.
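The fill-in-the-blanks notion can be sketched in a few lines of Python using the standard library's string.Template (the company and product names are hypothetical):

```python
from string import Template

# A fill-in-the-blanks boilerplate: the wording is fixed and reusable;
# only the named placeholders change from one use to the next.
disclaimer = Template(
    "$company assumes no responsibility for data loss caused by $product."
)

print(disclaimer.substitute(company="Acme Corp", product="WidgetPro 2.0"))
# Acme Corp assumes no responsibility for data loss caused by WidgetPro 2.0.
```

The same template can be stamped out repeatedly with different substitutions, which is precisely what makes boilerplate text economical.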

In the 1890s, boilerplate was actually cast or stamped in metal ready for the printing press and distributed to newspapers around the United States. Until the 1950s, thousands of newspapers received and used this kind of boilerplate from the nation's largest supplier, the Western Newspaper Union. Some companies also sent out press releases as boilerplate so that they had to be printed as written.

dhilipkumar

what is Grid storage



         Grid storage is a general term for any approach to storing data that employs multiple self-contained storage nodes interconnected so that any node can communicate with any other node without the data having to pass through a centralized switch. Each storage node contains its own storage medium, microprocessor, indexing capability, and management layer. Typically, several nodes may share a common switch, but each node is also connected to at least one other node cluster. Several topologies have been designed and tested, including the interconnection of nodes in a hypercube configuration, similar to the way nodes are interconnected in a mesh network.

Grid storage offers at least three advantages over older storage methods. First, grid storage introduces a new level of fault tolerance and redundancy. If one storage node fails or a pathway between two nodes is interrupted, the network can reroute access along another path or to a redundant node. This reduces the need for online maintenance, which practically eliminates downtime. Secondly, the existence of multiple paths between each pair of nodes ensures that the storage grid can maintain optimum performance under conditions of fluctuating load. Thirdly, grid storage is scalable. If a new storage node is added, it can be automatically recognized by the rest of the grid. This reduces the need for expensive hardware upgrades and downtime.

dhilipkumar

what is IP SAN:


              IP storage is a general term for several approaches to using the Internet Protocol (IP) in a storage area network (SAN), usually over Gigabit Ethernet. IP storage is an alternative to the Fibre Channel framework of the traditional SAN. Proponents of IP-based storage claim that it offers a number of benefits over the Fibre Channel alternative and will promote the widespread adoption of SANs that was predicted when they were first introduced. Although SANs have been around since the mid-to-late 1990s, they haven't enjoyed the market acceptance that developers expected; Fibre Channel expense, complexity, and interoperability issues are frequently cited as the cause. According to proponents, IP storage provides a solution to these issues that will enable the SAN to fulfill its early promise.

          For example, taking advantage of common network hardware and technologies may make IP SANs less complicated to deploy than Fibre Channel. The hardware components are less expensive, and because the technologies are widely known and used there are few interoperability issues and training costs are lower. Furthermore, the ubiquity of TCP/IP networks makes it possible to extend or connect SANs worldwide. There are several technology alternatives currently used for IP SANs: iSCSI (Internet SCSI) replaces Fibre Channel, while alternatives such as iFCP (Internet Fibre Channel Protocol) and FCIP (Fibre Channel over IP) offer hybrid approaches that can be used to extend Fibre Channel frameworks and to migrate from them to an IP storage network.

dhilipkumar

what is CDMA One


CDMA One, also written cdmaOne, refers to the original IS-95 (CDMA) wireless interface protocol, first standardized by the Telecommunications Industry Association (TIA) in 1993. It is considered a second-generation (2G) mobile wireless technology.

Today, there are two versions of IS-95, called IS-95A and IS-95B. The IS-95A protocol employs a 1.25-MHz carrier, operates in radio-frequency bands at either 800 MHz or 1.9 GHz, and supports data speeds of up to 14.4 Kbps. IS-95B can support data speeds of up to 115 Kbps by bundling up to eight channels.


dhilipkumar

FCoE (Fibre Channel over Ethernet)

FCoE (Fibre Channel over Ethernet) is a proposed standard designed to enable Fibre Channel communications to run directly over Ethernet. Thus, FCoE makes it possible to move Fibre Channel traffic across existing high-speed Ethernet infrastructures and extend the reach and capability of storage area networks (SANs). This ability allows organizations to protect and extend existing investments in their storage networks. FCoE competes with iSCSI.

Fibre Channel supports high-speed data connections between computing devices that interconnect servers with shared storage devices and between storage controllers and drives. FCoE retains the same device communications but substitutes high-speed Ethernet links (usually, 1 Gbps or faster) for Fibre Channel links between devices. FCoE works with standard Ethernet cards, cables, and switches to handle Fibre Channel traffic at the link layer, using Ethernet frames to encapsulate, route, and transport FC frames across an Ethernet network from one switch with Fibre Channel ports and devices attached to another, similarly-equipped switch.

Given recent progress toward an Ethernet 802.3ba standard that supports maximum bandwidths of 40 Gbps and 100 Gbps, it's easy to envision that these technologies will propel FCoE to higher speeds than current Fibre Channel and Ethernet links deliver (as of December 2007, links top out between 4 Gbps and 10 Gbps). FCoE is most likely to appear in multi-protocol switches that include both Fibre Channel and Ethernet ports, in which the transmitting switch encapsulates Fibre Channel frames inside Ethernet packets for LAN (local area network) transmission and the receiving switch reverses that process before emitting Fibre Channel frames out a Fibre Channel port. Vendors that offer both FCoE and iSCSI products that perform similar functions see this technology as complementary rather than competitive.

Networking and storage companies backing FCoE include Cisco, Emulex, Brocade Communications, EMC, Intel, QLogic, and Sun Microsystems.

dhilipkumar

what is Local loop

              In telephony, a local loop is the wired connection from a telephone company's central office in a locality to its customers' telephones at homes and businesses. This connection is usually a pair of copper wires called twisted pair. The system was originally designed for voice transmission only, using analog transmission technology on a single voice channel. Today, your computer's modem makes the conversion between analog signals and digital signals. With Integrated Services Digital Network (ISDN) or Digital Subscriber Line (DSL), the local loop can carry digital signals directly and at a much higher bandwidth than it can for voice alone.

dhilipkumar

What is OQPSK

Staggered quadrature phase-shift keying (SQPSK), also known as offset quadrature phase-shift keying (OQPSK), is a method of phase-shift keying (PSK) in which the signal carrier-wave phase transition is always 90 degrees or 1/4 cycle at a time. A phase shift of 90 degrees is known as phase quadrature.

In SQPSK, the data is placed alternately on two channels or streams called the I channel (for "in phase") and the Q channel ("phase quadrature"). A single phase transition can never exceed 90 degrees. This property contrasts SQPSK with conventional quadrature phase-shift keying (QPSK), in which the phase can sometimes change by 180 degrees (two 90-degree shifts in a single transition). The average magnitude of the phase transitions is smaller with SQPSK than with conventional QPSK. The result of the smaller average phase "jump" is an improved signal-to-noise ratio (SNR) and a reduced error rate.

SQPSK and QPSK are not the only methods of PSK in common use. In binary phase-shift keying (BPSK) there are two possible states for the signal phase: 0 and 180 degrees. In QPSK and SQPSK there are four possible states: 0, +90, -90 and 180 degrees. In multiple phase-shift keying (MPSK) there can be more than four possible states for the signal phase. An example is the use of eight phase states: 0, +45, -45, +90, -90, +135, -135 and 180 degrees. More than eight phase states are rarely used because that takes the signal complexity past the point of diminishing returns and the error rate actually increases.
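The contrast between QPSK and SQPSK/OQPSK phase transitions can be sketched in Python. This is a simplified baseband model with an arbitrary bit pattern, not a standards-conformant modulator; it simply counts the largest phase jump each scheme produces.

```python
# Each (I, Q) symbol pair selects one of four carrier phases, in degrees.
PHASE = {(1, 1): 45, (1, -1): -45, (-1, 1): 135, (-1, -1): -135}

def max_jump(phases):
    """Largest absolute phase change between consecutive states, wrapped to 180."""
    jumps = [abs(b - a) % 360 for a, b in zip(phases, phases[1:])]
    return max(min(d, 360 - d) for d in jumps)

bits = [1, 0, 0, 1, 1, 1, 0, 0]                 # arbitrary example bit stream
i_sym = [1 if b else -1 for b in bits[0::2]]    # I ("in phase") channel
q_sym = [1 if b else -1 for b in bits[1::2]]    # Q ("phase quadrature") channel

# Conventional QPSK: I and Q may flip together at a symbol boundary,
# so the phase can jump by a full 180 degrees.
qpsk = [PHASE[i, q] for i, q in zip(i_sym, q_sym)]

# OQPSK: the Q stream is delayed by half a symbol, so at any half-symbol
# instant only one of I or Q can change -- the jump never exceeds 90 degrees.
oqpsk = []
for k in range(len(i_sym)):
    oqpsk.append(PHASE[i_sym[k], q_sym[k - 1] if k > 0 else q_sym[0]])
    oqpsk.append(PHASE[i_sym[k], q_sym[k]])

print(max_jump(qpsk), max_jump(oqpsk))  # 180 90
```

The staggering of the Q channel is the entire trick: by never letting both channels change at once, the worst-case phase transition drops from 180 to 90 degrees.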

dhilipkumar

WHAT IS encoding and decoding:

In computers, encoding is the process of putting a sequence of characters (letters, numbers, punctuation, and certain symbols) into a specialized format for efficient transmission or storage. Decoding is the opposite process -- the conversion of an encoded format back into the original sequence of characters. Encoding and decoding are used in data communications, networking, and storage. The terms are especially applicable to radio (wireless) communications systems.

The code used by most computers for text files is known as ASCII (American Standard Code for Information Interchange, pronounced ASK-ee). ASCII can depict uppercase and lowercase alphabetic characters, numerals, punctuation marks, and common symbols. Other commonly-used codes include Unicode, BinHex, Uuencode, and MIME. In data communications, Manchester encoding is a special form of encoding in which the binary digits (bits) represent the transitions between high and low logic states. In radio communications, numerous encoding and decoding methods exist, some of which are used only by specialized groups of people (amateur radio operators, for example). The oldest code of all, originally employed in the landline telegraph during the 19th century, is the Morse code.

The terms encoding and decoding are often used in reference to the processes of analog-to-digital conversion and digital-to-analog conversion. In this sense, these terms can apply to any form of data, including text, images, audio, video, multimedia, computer programs, or signals in sensors, telemetry, and control systems. Encoding should not be confused with encryption, a process in which data is deliberately altered so as to conceal its content. Encryption can be done without changing the particular code that the content is in, and encoding can be done without deliberately concealing the content.
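A small Python example shows an encode/decode round trip using two of the codes mentioned above: ASCII as the character code, and Base64 (the encoding used by MIME) as a transfer encoding.

```python
import base64

text = "Hello, world!"

# Encoding step 1: represent the characters as bytes using a character code (ASCII).
ascii_bytes = text.encode("ascii")

# Encoding step 2: a MIME transfer encoding such as Base64 re-encodes those
# bytes into a format safe for transmission over text-only channels.
b64 = base64.b64encode(ascii_bytes)
print(b64.decode("ascii"))  # SGVsbG8sIHdvcmxkIQ==

# Decoding reverses each step and recovers the original character sequence.
recovered = base64.b64decode(b64).decode("ascii")
assert recovered == text
```

Note that nothing is concealed here: anyone who knows the code can decode the data, which is what distinguishes encoding from encryption.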

dhilipkumar

what is cache cramming

Cache cramming is a method of tricking a computer into running Java code it would not ordinarily run. The method consists of placing code in the computer's local disk cache when the computer uses Internet Explorer in certain environments.
The rogue Java code, which is a special applet (small program) known as a port scanner, is executed as a result of the computer user visiting a particular Web site designed by the cracker. When activated, the applet opens a socket connection from the cracker's computer. This can give the cracker access to data on the hard drive of the affected computer.

dhilipkumar

what is blue bomb

A "blue bomb" (also known as "WinNuke") is a technique for causing the Windows operating system of someone you're communicating with to crash or suddenly terminate. The "blue bomb" is actually an out-of-band network packet containing information that the operating system can't process. This condition causes the operating system to "crash" or terminate prematurely. The operating system can usually be restarted without any permanent damage other than the possible loss of unsaved data at the time of the crash.

The blue bomb derives its name from the effect it sometimes causes on the display as the operating system is terminating - a white-on-blue error screen that is commonly known as blue screen of death. Blue bombs are sometimes sent by multi-player game participants who are about to lose or users of Internet Relay Chat (IRC) who are making a final comment. This is known as "nuking" someone. A commonly-used program for causing the blue bomb is WinNuke. Many Internet service providers are filtering out the packets so they don't reach users.

dhilipkumar

what is DAO

DAO (Data Access Objects) is an application program interface (API) available with Microsoft's Visual Basic that lets a programmer request access to a Microsoft Access database. DAO was Microsoft's first object-oriented interface with databases.

DAO objects encapsulate the functions of Access's Jet database engine; through Jet, DAO can also access other Structured Query Language (SQL) databases.

To conform with Microsoft's vision of a Universal Data Access (UDA) model, programmers are being encouraged to move from DAO, although it is still widely used, to ActiveX Data Objects (ADO) and its low-level interface with databases, OLE DB. ADO and OLE DB offer a faster interface that is also easier to program.

dhilipkumar

what is Web services:

Web services (sometimes called application services) are services (usually including some combination of programming and data, but possibly including human resources as well) that are made available from a business's Web server for Web users or other Web-connected programs. Providers of Web services are generally known as application service providers. Web services range from such major services as storage management and customer relationship management (CRM) down to much more limited services such as the furnishing of a stock quote and the checking of bids for an auction item. The accelerating creation and availability of these services is a major Web trend.

Users can access some Web services through a peer-to-peer arrangement rather than by going to a central server. Some services can communicate with other services and this exchange of procedures and data is generally enabled by a class of software known as middleware. Services previously possible only with the older standardized service known as Electronic Data Interchange (EDI) increasingly are likely to become Web services. Besides the standardization and wide availability to users and businesses of the Internet itself, Web services are also increasingly enabled by the use of the Extensible Markup Language (XML) as a means of standardizing data formats and exchanging data. XML is the foundation for the Web Services Description Language (WSDL).

As Web services proliferate, concerns include the overall demands on network bandwidth and, for any particular service, the effect on performance as demands for that service rise. A number of new products have emerged that enable software developers to create or modify existing applications that can be "published" (made known and potentially accessible) as Web services.

dwarakesh

WHAT IS ASSEMBLY LANGUAGE

An assembler is a program that takes basic computer instructions and converts them into a pattern of bits that the computer's processor can use to perform its basic operations. Some people call these instructions assembler language and others use the term assembly language.

Here's how it works:

    * Most computers come with a specified set of very basic instructions that correspond to the basic machine operations that the computer can perform. For example, a "Load" instruction causes the processor to move a string of bits from a location in the processor's memory to a special holding place called a register. Assuming the processor has at least eight registers, each numbered, the following instruction would move the value (string of bits of a certain length) at memory location 3000 into the holding place called register 8:

            L        8,3000

    * The programmer can write a program using a sequence of these assembler instructions.
    * This sequence of assembler instructions, known as the source code or source program, is then specified to the assembler program when that program is started.
    * The assembler program takes each program statement in the source program and generates a corresponding bit stream or pattern (a series of 0's and 1's of a given length).
    * The output of the assembler program is called the object code or object program relative to the input source program. The sequence of 0's and 1's that constitute the object program is sometimes called machine code.
    * The object program can then be run (or executed) whenever desired.
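The pipeline in the list above can be sketched as a toy assembler in Python. The opcode value and field widths here are invented for illustration; real encodings are defined by each processor's instruction set architecture.

```python
# Toy assembler: translates a made-up "L reg,addr" (Load) statement
# into a 32-bit bit pattern. Opcode value and field widths are hypothetical.
OPCODES = {"L": 0b0101}  # 4-bit opcode for "Load" (invented)

def assemble(statement):
    """Turn one source statement like 'L 8,3000' into a 32-bit word:
    4-bit opcode | 4-bit register number | 24-bit memory address."""
    mnemonic, operands = statement.split()
    reg, addr = (int(x) for x in operands.split(","))
    word = (OPCODES[mnemonic] << 28) | (reg << 24) | addr
    return format(word, "032b")  # object code: a string of 0s and 1s

# Source program -> object program, one bit pattern per statement
source = ["L 8,3000"]
object_code = [assemble(line) for line in source]
print(object_code[0])
```

Each source statement maps to exactly one fixed-length bit pattern, which is the essential difference from a compiler, where one high-level statement may expand into many machine instructions.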

In the earliest computers, programmers actually wrote programs in machine code, but assembler languages or instruction sets were soon developed to speed up programming. Today, assembler programming is used only where very efficient control over processor operations is needed. It requires knowledge of a particular computer's instruction set, however. Historically, most programs have been written in "higher-level" languages such as COBOL, FORTRAN, PL/I, and C. These languages are easier to learn and faster to write programs with than assembler language. The program that processes the source code written in these languages is called a compiler. Like the assembler, a compiler takes higher-level language statements and reduces them to machine code.

A newer idea in program preparation and portability is the concept of a virtual machine. For example, using the Java programming language, language statements are compiled into a generic form of machine language known as bytecode that can be run by a virtual machine, a kind of theoretical machine that approximates most computer operations. The bytecode can then be sent to any computer platform that has previously downloaded or built in the Java virtual machine. The virtual machine is aware of the specific instruction lengths and other particularities of the platform and ensures that the Java bytecode can run.

dhilipkumar

what is bit

A bit (short for binary digit) is the smallest unit of data in a computer. A bit has a single binary value, either 0 or 1. Although computers usually provide instructions that can test and manipulate bits, they generally are designed to store data and execute instructions in bit multiples called bytes. In most computer systems, there are eight bits in a byte. The value of a bit is usually stored as either above or below a designated level of electrical charge in a single capacitor within a memory device.

Half a byte (four bits) is called a nibble. In some systems, the term octet is used for an eight-bit unit instead of byte. In many systems, four eight-bit bytes or octets form a 32-bit word. In such systems, instruction lengths are sometimes expressed as full-word (32 bits in length) or half-word (16 bits in length).

In telecommunication, the bit rate is the number of bits that are transmitted in a given time period, usually a second.
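The units and bit operations described above can be checked with a few lines of Python. The bitwise operators are standard; the word sizes are just the common conventions the text mentions.

```python
# A byte is 8 bits, so it can hold 2**8 = 256 distinct values (0-255).
byte_values = 2 ** 8

# Testing and manipulating individual bits with bitwise operators:
value = 0b1010_0110
bit_3 = (value >> 3) & 1       # test bit 3 (counting from 0)
value |= 1 << 3                # set bit 3
value &= ~(1 << 7) & 0xFF      # clear bit 7, staying within one byte

# Common bit multiples: nibble, byte, half-word, full-word
nibble, byte, half_word, full_word = 4, 8, 16, 32

# Bit rate: bits transmitted per unit time, e.g. 1500 bytes in 0.001 s
bit_rate = (1500 * byte) / 0.001  # bits per second
print(bit_rate)
```

Here 1500 bytes sent in a millisecond works out to 12,000,000 bits per second, showing why network speeds are quoted in bits, not bytes.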

dhilipkumar

WHAT IS ASSP

In computers, an ASSP (application-specific standard product) is a semiconductor device integrated circuit (IC) product that is dedicated to a specific application market and sold to more than one user (and thus, "standard"). The ASSP is marketed to multiple customers just as a general-purpose product is, but to a smaller number of customers since it is for a specific application. Like an ASIC (application-specific integrated circuit), the ASSP is for a special application, but it is sold to any number of companies. (An ASIC is designed and built to order for a specific company.)

An ASSP generally offers the same performance characteristics and has the same die size as an ASIC. According to a Dataquest study, 17% of all semiconductor products sold in 1999 were ASSPs; 83% were general-purpose. According to Dataquest's Jim Walker, the trend is toward more application-specific products.

dhilipkumar

WHAT IS atomic storage


Atomic storage (sometimes called atomic memory) is a nanotechnology approach to computer data storage that works with bits and atoms on the individual level. Like other nanotechnologies, nano-storage deals with microscopic material. An atom is so small that there might be ten million billion in a single grain of sand; optimally, atomic storage would store a bit of data in a single atom. Current data storage methods use millions of atoms to store a bit of data. In 1959, the famous physicist Richard Feynman discussed the potential of atomic storage, explaining that every word ever written up to that point could be stored in a cube 0.1 millimeters wide, if the words were written with atoms.

Franz Himpsel and colleagues at the University of Wisconsin-Madison created a device that uses 20 atoms to represent a bit of data on a silicon surface. The surface resembles that of a CD (compact disc) but the scale is nanometers rather than micrometers, yielding a storage density a million times higher. Himpsel and colleagues used a scanning tunneling microscope to remove single atoms, and suggest that an extra silicon atom might represent a 1, while a vacant spot represents a 0 (binary language is made up entirely of ones and zeroes). Although the researchers claim their prototype is "proof of (Feynman's) concept," they say that it may yet take decades of work to develop a practical working device that stores bits as single atoms.
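The "million times higher" figure follows from simple area scaling, sketched here in Python as a back-of-the-envelope check rather than an exact result.

```python
# Areal storage density scales with the inverse square of feature size.
# Going from micrometer-scale features (CD) to nanometer-scale features
# is a factor of 1000 in length, hence 1000**2 in bits per unit area.
length_scale_factor = 1000               # micrometers / nanometers
density_gain = length_scale_factor ** 2  # bits per area scale as 1/length**2
print(density_gain)  # a million times higher
```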

IBM is working on a different approach to nano-storage in their Millipede project.

dhilipkumar

WHAT IS Application Link Enabling

Application Link Enabling (ALE) is a mechanism for the exchange of business data between loosely-coupled R/3 applications built by customers of SAP, the enterprise resource management program. ALE provides SAP customers with a program distribution model and technology that enables them to interconnect programs across various platforms and systems.

There are three layers in the ALE system: application services, distribution services, and communication services. The vehicle for data transfer is called an IDoc (intermediate document), which is a container for the application data to be transmitted. After a user performs an SAP transaction, one or more IDocs are generated in the sending database and passed to the ALE communication layer. The communication layer performs a Remote Function Call (RFC), using the port definition and RFC destination specified by the customer model. The IDoc is transmitted to the receiver, which may be an R/3, R/2, or some external system. If the data is distributed from a master system, the same transaction performed by the sender will be performed by the receiving system, using the information contained in the IDoc.

Changes made to fields in master data tables can be set to trigger distribution of the changes to slave systems, so that multiple database servers can update the same information simultaneously. IDocs carry information directly between SAP systems. In order to communicate with a non-SAP system, an IDoc is first transmitted to an intermediary system that translates the data to a format that will be understood by the receiver. Return data also passes through the translating system, where it is again encapsulated into an IDoc.
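The flow described above can be sketched in Python. The IDoc fields and the translation step below are simplified stand-ins for illustration, not SAP's actual structures or APIs.

```python
from dataclasses import dataclass, field

@dataclass
class IDoc:
    """Simplified intermediate document: a container for the application
    data to be transmitted (the fields here are illustrative only)."""
    doc_type: str
    sender: str
    receiver: str
    data: dict = field(default_factory=dict)

def send_via_ale(idoc, receiver_is_sap):
    """Sketch of the communication layer: an SAP receiver gets the IDoc
    directly; a non-SAP receiver goes through a translating intermediary."""
    if receiver_is_sap:
        return idoc  # transmitted as-is via a Remote Function Call
    # The intermediary translates the IDoc into a format the receiver understands
    return {"type": idoc.doc_type, "payload": idoc.data}

# A master-data change generates an IDoc and passes it to the ALE layer
idoc = IDoc("MATMAS", sender="R3_PROD", receiver="LEGACY",
            data={"material": "M-100"})
translated = send_via_ale(idoc, receiver_is_sap=False)
```

The key design point the sketch reflects: the IDoc is a neutral container, so only the edges of the system (the intermediary) need to know about non-SAP formats.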

dhilipkumar

WHAT IS  aspect-oriented programming

Aspect-oriented programming (AOP) is an approach to programming that allows global properties of a program to determine how it is compiled into an executable program. AOP can be used with object-oriented programming (OOP).

An aspect is a subprogram that is associated with a specific property of a program. As that property varies, the effect "ripples" through the entire program. The aspect subprogram is used as part of a new kind of compiler called an aspect weaver.

The conceptualizers of AOP compare aspect programming to the manufacturing of cloth in which threads are automatically interwoven. Without AOP, programmers must stitch the threads by hand.
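In everyday practice, aspects are often approximated with simpler language features. The Python sketch below uses a decorator to weave a logging aspect around a function; this is a run-time analogy to the compile-time aspect weaver, not a real AOP implementation.

```python
import functools

def logging_aspect(func):
    """A cross-cutting concern (logging) woven around a function.
    An aspect weaver does this during compilation; a decorator is a
    simple run-time stand-in for the same idea."""
    @functools.wraps(func)
    def woven(*args, **kwargs):
        print(f"entering {func.__name__}")
        result = func(*args, **kwargs)
        print(f"leaving {func.__name__}")
        return result
    return woven

@logging_aspect
def transfer(amount):
    return amount * 2  # core business logic stays free of logging code

result = transfer(21)
```

The point of the analogy: the logging "thread" is interwoven automatically at every decorated function, instead of being stitched into each function body by hand.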



dhilipkumar

what is 4G


4G is the short term for fourth-generation wireless, the stage of broadband mobile communications that will supersede the third generation (3G). While neither standards bodies nor carriers have concretely defined or agreed upon what exactly 4G will be, it is expected that end-to-end IP and high-quality streaming video will be among 4G's distinguishing features. Fourth-generation networks are likely to use a combination of WiMAX and WiFi.
Technologies employed by 4G may include SDR (software-defined radio) receivers, OFDM (Orthogonal Frequency Division Multiplexing), OFDMA (Orthogonal Frequency Division Multiple Access), MIMO (multiple input/multiple output) technologies, UMTS and TD-SCDMA. All of these delivery methods are typified by high rates of data transmission and packet-switched transmission protocols. 3G technologies, by contrast, are a mix of packet- and circuit-switched networks.

When fully implemented, 4G is expected to enable pervasive computing, in which simultaneous connections to multiple high-speed networks provide seamless handoffs throughout a geographical area. Network operators may employ technologies such as cognitive radio and wireless mesh networks to ensure connectivity and efficiently distribute both network traffic and spectrum.

The high speeds offered by 4G will create new markets and opportunities for both traditional and startup telecommunications companies. 4G networks, when coupled with cellular phones equipped with higher quality digital cameras and even HD capabilities, will enable vlogs to go mobile, as has already occurred with text-based moblogs. New models for collaborative citizen journalism are likely to emerge as well in areas with 4G connectivity.

A Japanese company, NTT DoCoMo, is testing 4G communication at 100 Mbps for mobile users and up to 1 Gbps while stationary. NTT DoCoMo plans on releasing their first commercial network in 2010. Other telecommunications companies, however, are moving into the area even faster. In August of 2006, Sprint Nextel announced plans to develop and deploy a 4G broadband mobile network nationwide in the United States using WiMAX. The United Kingdom's chancellor of the exchequer announced a plan to auction 4G frequencies in fall of 2006.

4G technologies are sometimes referred to by the acronym "MAGIC," which stands for Mobile multimedia, Anytime/anywhere, Global mobility support, Integrated wireless and Customized personal service.