Saturday, August 20, 2011

CSE 101 LECTURE 5

FOR POWER POINT AND RELEVANT SLIDE MAIL ME AT tanvirfalcon@gmail.com
Slide 2
Computer science:
Computer science is concerned with theory and fundamentals; software engineering is concerned with the practicalities of developing and delivering useful software. Computer science theories are still insufficient to act as a complete underpinning for software engineering (unlike, e.g., physics and electrical engineering), but they are a foundation for the practical aspects of software engineering.
Software engineering is an engineering discipline which is concerned with all aspects of software production. Software engineers should adopt a systematic and organised approach to their work and use appropriate tools and techniques depending on the problem to be solved, the development constraints and the resources available. Software engineering is the part of this process concerned with developing the software infrastructure, control, applications and databases in the system, so it is a part of system engineering. System engineers are involved in system specification, architectural design and integration.
System Engineering is concerned with all aspects of computer-based systems development including hardware, software and process engineering. Systems engineering signifies both an approach and, more recently, a discipline in engineering. The aim of education in systems engineering is to simply formalize the approach and in doing so, identify new methods and research opportunities similar to the way it occurs in other fields of engineering. As an approach, systems engineering is holistic and interdisciplinary in flavor.
Slide - 8
Waterfall Model
The waterfall model is a popular version of the systems development life cycle model for software engineering. Often considered the classic approach to the systems development life cycle, the waterfall model describes a development method that is linear and sequential. Waterfall development has distinct goals for each phase of development. Imagine a waterfall on the cliff of a steep mountain. Once the water has flowed over the edge of the cliff and has begun its journey down the side of the mountain, it cannot turn back. It is the same with waterfall development. Once a phase of development is completed, the development proceeds to the next phase and there is no turning back.
System Feasibility: Defining a preferred concept for the software product, and determining its life-cycle feasibility and superiority to alternative concepts.
Requirements: A complete, verified specification of the required functions, interfaces, and performance for the software product.
Product Design: A complete, verified specification of the overall hardware-software architecture, control structure, and data structure for the product, along with such other necessary components as draft user's manuals and test plans.
Detailed Design: A complete verified specification of the control structure, data structure, interface relations, sizing, key algorithms, and assumptions of each program component.
Coding: A complete, verified set of program components.
Integration: A properly functioning software product composed of the software components.
Implementation: A fully functioning operational hardware-software system, including such objectives as program and data conversion, installation, and training.
Maintenance: A fully functioning update of the hardware-software system repeated for each update.
Phaseout: A clean transition of the functions performed by the product to its successors.
Advantages
If failures are detected at the beginning of the project, correcting them takes less effort, and therefore less time and money. In the waterfall model each phase must be properly completed before proceeding to the next stage, so the phases are believed to be correct before the next phase begins. The waterfall model also puts emphasis on documentation; newer software development methodologies produce less documentation, which makes it difficult to transfer knowledge when new people join the project or people leave, and the traditional waterfall model does not have this disadvantage. It is a straightforward method: the way of working ensures that there are specific phases, which tell you what stage the project is in, and these phases can be used as milestones to monitor and estimate the progress of the project. The waterfall model is well known; many people have experience with it, so it can be easy to work with. Finally, when portions of the software product are delivered frequently, this gives confidence to the customer as well as to the software development team.
Disadvantages
There are some disadvantages to this way of developing software. Many software projects depend on external factors, and the client is a very important external factor. Requirements often change over the course of the project because the client wants something different, but the waterfall model assumes that the requirements will not change during the project; when a requirement changes in the construction phase, a substantial number of phases must be done again. It is also very difficult to estimate time and cost: the phases are very large, so it is hard to estimate how much each step will cost. Several newer methods cover almost all aspects of a software development process, such as planning techniques, project management methods and how the project should be organized. In many software projects, different people work at different stages of the project, for example the designers and the builders, and they all have a different view of the project: designers look at the project differently than builders do, and vice versa. Frequently the design has to be adjusted again, and the waterfall model is not made for that. Within the project the team members are often specialized: one team member is involved only in the first phase, the design, while the builders only help build the project in the construction phase. This can lead to waste of different resources, the main one being time. For example, while the designers are still perfecting the design, the builders could in principle already start building, but because they work with the waterfall model they must wait until the first phase is complete; this is a typical example of wasted time. Testing is done only in one of the last phases of the project, whereas in many other software development methods each part is tested once it is finished and an integration test is done at the end. Because it places so much emphasis on documentation, the waterfall model is not efficient for smaller projects; too much of the effort goes into documentation rather than the project itself.
Slide – 10
Advantages:
Risk reduction mechanisms are in place
Supports iteration and reflects real-world practices
Systematic approach
Disadvantages:
Requires expertise in risk evaluation and reduction
Complex, relatively difficult to follow strictly
Applicable only to large systems
Applicability:
Internal development of large systems

CSE 101 LECTURE 4

FOR POWER POINT AND RELEVANT SLIDE MAIL ME AT tanvirfalcon@gmail.com
Slide-3
Simultaneous Access
• In organizations, many people may need to use the same data or programs. A network solves this problem.
• Shared data and programs can be stored on a central network server. A server that stores data files may be called a file server.
• Managers may assign access rights to users. Some users may only be able to read data, others may be able to make changes to existing files.
Shared Peripheral Devices
• Because peripheral (external) devices like printers can be expensive, it is cost-effective to connect a device to a network so users can share it.
• Through a process called spooling, users can send multiple documents (called print jobs) to a networked printer at the same time. The documents are temporarily stored on the server and printed in turn.
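A minimal sketch of the spooling idea in Python (the job names and single-printer setup are hypothetical): documents from several users go into a queue on the server and are printed one at a time, in the order they arrived.

from queue import Queue

# Hypothetical print jobs sent by different users at roughly the same time.
spool = Queue()
for job in ["report.docx (alice)", "photo.png (bob)", "notes.txt (carol)"]:
    spool.put(job)                 # the server stores each job temporarily

# The print server takes jobs off the spool and prints them in turn.
while not spool.empty():
    print("printing:", spool.get())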
Personal Communication
• One of the most common uses of networks is for electronic mail (e-mail).
• An e-mail system enables users to exchange written messages (often with data files attached) across the local network or over the Internet.
• Two other popular network-based communications systems are teleconferencing and videoconferencing.
Easier Backup
• Networks enable managers to easily back up (make backup copies of) important data.
• Administrators commonly back up shared data files stored on the server, but may also use the network to back up files on users' PCs.
Slide-4
Server-based networking
Server-based networking provides a central location for management, backups, updates, anti-virus management, anti-spam management, and anti-spyware management. Server-based networking can also provide you with monitoring of employee activities such as internet usage, web pages viewed and instant messenger use.
• A network in which all client computers use a dedicated central server computer for network functions such as storage, security and other resources.
• A server has a large hard disk for shared storage. It may provide other services to the nodes, as well.
In a file server network, nodes can access files on the server, but not necessarily on other nodes.
Mainframe
Mainframe computers are typically large, metal-boxed computers with large processing abilities. The terminals are called "dumb terminals" because they only send and receive data, leaving the processing to the mainframe.
Client / Server Network
The term client/server refers to the way two computer programs interact with each other.
The client makes a request to the server, which then fulfills the request. Although this idea can be used on one computer, it is an efficient way for a network of computers in different locations to interconnect.
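As a rough illustration of that request/fulfil cycle (not any particular product), here is a minimal Python socket sketch; the port number and messages are arbitrary assumptions, and the "server" runs in a thread of the same program only so the example is self-contained.

import socket
import threading

# Server side: listen, then fulfil one request.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 5000))      # hypothetical port
srv.listen(1)

def serve_one():
    conn, _ = srv.accept()                          # wait for a client
    request = conn.recv(1024)                       # receive the request
    conn.sendall(b"server reply to: " + request)    # fulfil it
    conn.close()

threading.Thread(target=serve_one).start()

# Client side: contact the server and make a request.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 5000))
cli.sendall(b"GET data")
print(cli.recv(1024).decode())                      # the server's answer
cli.close()
srv.close()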
Advantages
In most cases, a client-server architecture enables the roles and responsibilities of a computing system to be distributed among several independent computers that are known to each other only through a network. This creates an additional advantage to this architecture: greater ease of maintenance. For example, it is possible to replace, repair, upgrade, or even relocate a server while its clients remain both unaware and unaffected by that change. This independence from change is also referred to as encapsulation.
All the data is stored on the servers, which generally have far greater security controls than most clients. Servers can better control access and resources, to guarantee that only those clients with the appropriate permissions may access and change data.
Since data storage is centralized, updates to that data are far easier to administer than what would be possible under a P2P paradigm. Under a P2P architecture, data updates may need to be distributed and applied to each "peer" in the network, which is both time-consuming and error-prone, as there can be thousands or even millions of peers.
Many mature client-server technologies are already available which were designed to ensure security, 'friendliness' of the user interface, and ease of use.
It works with many different clients of varying capabilities.
Disadvantages
Traffic congestion on the network has been an issue since the inception of the client-server paradigm. As the number of simultaneous client requests to a given server increases, the server can become severely overloaded. Contrast that to a P2P network, where its bandwidth actually increases as more nodes are added, since the P2P network's overall bandwidth can be roughly computed as the sum of the bandwidths of every node in that network.
The client-server paradigm lacks the robustness of a good P2P network. Under client-server, should a critical server fail, clients’ requests cannot be fulfilled. In P2P networks, resources are usually distributed among many nodes. Even if one or more nodes depart and abandon a downloading file, for example, the remaining nodes should still have the data needed to complete the download.
Specific types of clients include web browsers, email clients, and online chat clients.
Specific types of servers include web servers, ftp servers, application servers, database servers, mail servers, file servers, print servers, and terminal servers. Most web services are also types of servers.
LANs
• LAN stands for Local Area Network.
• The first LANs were created in the late 1970s.
• LANs are small networks confined to a small area like a house, an office, or a group of buildings.
• A LAN is a network whose computers are located relatively near one another.
• The nodes may be connected by a cable, infrared link, or small transmitters.
• A network transmits data among computers by breaking it into small pieces, called packets.
• Every LAN uses a protocol – a set of rules that governs how packets are configured and transmitted.
• LANs are used to share resources like storage, internet access, etc.
• A 'node' on a LAN is a connected computer or device like a printer.
WANs
• WAN stands for Wide Area Network.
• WANs are very large networks that interconnect smaller LANs over a large geographic area like a country (i.e., any network whose communications links cross metropolitan, regional, or national boundaries).
• WANs are usually for private companies, however, some built by internet service providers connect LANs to the internet.
• A WAN can use a combination of satellite, microwave, and cable links, and a variety of computers from mainframes to terminals.
• A 'node' on a WAN is a LAN.
Personal Area Network (PAN)
A personal area network (PAN) is a computer network used for communication among computer devices.

A PAN usually uses short-range wireless technology to connect devices such as a cell phone and a PDA.
Home Area Network (HAN)
HAN is short for Home Area Network. HAN is a recently coined term for a small LAN in a home environment, used for lifestyle purposes.
A home area network uses wired or wireless connections to connect a home's digital devices, for example fax machines, computers, DVD players, etc.
Garden Area Network (GAN)
A system similar to a HAN is a GAN, which stands for Garden Area Network; it allows one system to control devices such as garden lights, sprinkler systems and alarm systems.
Slide-8
In peer-to-peer networking there are no dedicated servers or hierarchy among the computers. All of the computers are equal and therefore known as peers. Normally each computer acts as both client and server, and no one is assigned to be an administrator responsible for the entire network.

Peer-to-peer networks are good choices for the needs of small organizations where the users are located in the same general area, security is not an issue, and the organization and the network will have limited growth within the foreseeable future.

The term client/server refers to the concept of sharing the work involved in processing data between the client computer and the more powerful server computer.
Slide-9
Network Interface Cards (NIC)
A network card, network adapter or NIC (network interface card) is a piece of computer hardware designed to allow computers to communicate over a computer network. It provides physical access to a networking medium and often provides a low-level addressing system through the use of MAC addresses. It allows users to connect to each other either by using cables or wirelessly.
Repeaters
A repeater is an electronic device that receives a signal and retransmits it at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted-pair Ethernet configurations, repeaters are required for cable runs longer than 100 meters.
Hubs
A hub contains multiple ports. When a packet arrives at one port, it is copied to all the ports of the hub for transmission; when the packets are copied, the destination address in the frame does not change to a broadcast address. The hub does this in a rudimentary way: it simply copies the data to all of the nodes connected to it.
Bridges
A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model. Bridges do not promiscuously copy traffic to all ports, as hubs do, but learn which MAC addresses are reachable through specific ports. Once the bridge associates a port and an address, it will send traffic for that address only to that port. Bridges do send broadcasts to all ports except the one on which the broadcast was received.
Bridges learn the association of ports and addresses by examining the source address of frames that they see on various ports. Once a frame arrives through a port, its source address is stored and the bridge assumes that MAC address is associated with that port. The first time a previously unknown destination address is seen, the bridge forwards the frame to all ports other than the one on which the frame arrived.
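That learning behaviour can be sketched in a few lines of Python; the MAC addresses, port numbers and frames below are made up purely for illustration.

# Hypothetical frames seen by the bridge: (source MAC, destination MAC, arrival port)
frames = [("AA", "BB", 1), ("BB", "AA", 2), ("AA", "CC", 1)]

table = {}                                   # learned MAC-address -> port table
for src, dst, in_port in frames:
    table[src] = in_port                     # learn: the source lives on this port
    if dst in table:
        print(f"{src}->{dst}: forward out port {table[dst]} only")
    else:
        print(f"{src}->{dst}: destination unknown, flood all other ports")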
Bridges come in three basic types:
Local bridges: Directly connect local area networks (LANs)
Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced by routers.
Wireless bridges: Can be used to join LANs or connect remote stations to LANs.
Switches
A switch is a device that performs switching. Specifically, it forwards and filters OSI layer 2 datagrams (chunk of data communication) between ports (connected cables) based on the MAC addresses in the packets. This is distinct from a hub in that it only forwards the datagrams to the ports involved in the communications rather than all ports connected. Strictly speaking, a switch is not capable of routing traffic based on IP address (layer 3) which is necessary for communicating between network segments or within a large or complex LAN. Some switches are capable of routing based on IP addresses but are still called switches as a marketing term. A switch normally has numerous ports, with the intention being that most or all of the network is connected directly to the switch, or another switch that is in turn connected to a switch.
Switch is a marketing term that encompasses routers and bridges, as well as devices that may distribute traffic on load or by application content (e.g., a Web URL identifier). Switches may operate at one or more OSI model layers, including physical, data link, network, or transport (i.e., end-to-end). A device that operates simultaneously at more than one of these layers is called a multilayer switch.
Overemphasizing the ill-defined term "switch" often leads to confusion when first trying to understand networking. Many experienced network designers and operators recommend starting with the logic of devices dealing with only one protocol level, not all of which are covered by OSI. Multilayer device selection is an advanced topic that may lead to selecting particular implementations, but multilayer switching is simply not a real-world design concept.
Routers
Routers are networking devices that forward data packets between networks using headers and forwarding tables to determine the best path to forward the packets. Routers work at the network layer of the TCP/IP model or layer 3 of the OSI model. Routers also provide interconnectivity between like and unlike media (RFC 1812). This is accomplished by examining the header of a data packet and making a decision on the next hop to which it should be sent (RFC 1812). They use preconfigured static routes, the status of their hardware interfaces, and routing protocols to select the best route between any two subnets. A router is connected to at least two networks, commonly two LANs or WANs or a LAN and its ISP's network. Some DSL and cable modems, for home (and even office) use, have been integrated with routers to allow multiple home/office computers to access the Internet through the same connection. Many of these new devices also include wireless access points (WAPs) or wireless routers to allow IEEE 802.11b/g wireless-enabled devices to connect to the network without the need for cabled connections.
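A forwarding-table lookup can be sketched as follows in Python; the prefixes and next hops are invented for illustration, and real routers use far more efficient longest-prefix-match data structures.

import ipaddress

# Hypothetical forwarding table: destination prefix -> next hop
routes = {
    "192.168.1.0/24": "deliver on local LAN interface",
    "10.0.0.0/8":     "next hop 10.0.0.1",
    "0.0.0.0/0":      "default route to the ISP",
}

def next_hop(destination):
    addr = ipaddress.ip_address(destination)
    matches = [p for p in routes if addr in ipaddress.ip_network(p)]
    # choose the most specific (longest) matching prefix
    best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return routes[best]

print(next_hop("192.168.1.40"))   # matches the LAN prefix
print(next_hop("8.8.8.8"))        # falls through to the default route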
Slide – 12
Intranet
An intranet is a set of networks, using the Internet Protocol and IP-based tools such as web browsers and file transfer applications, that is under the control of a single administrative entity. That administrative entity closes the intranet to all but specific, authorized users. Most commonly, an intranet is the internal network of an organization. A large intranet will typically have at least one web server to provide users with organizational information.
Extranet
An extranet is a network or internetwork that is limited in scope to a single organization or entity but which also has limited connections to the networks of one or more other usually, but not necessarily, trusted organizations or entities (e.g. a company's customers may be given access to some part of its intranet creating in this way an extranet, while at the same time the customers may not be considered 'trusted' from a security standpoint). Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other type of network, although, by definition, an extranet cannot consist of a single LAN; it must have at least one connection with an external network.
INTERNET
• The Internet was created by the Advanced Research Projects Agency (ARPA) and the U.S. Department of Defense for scientific and military communications.
• The Internet is a network of interconnected networks. Even if part of its infrastructure was destroyed, data could flow through the remaining networks.
• The Internet uses high-speed data lines, called backbones, to carry data. Smaller networks connect to the backbone, enabling any user on any network to exchange data with any other user.
The Internet is a worldwide, publicly accessible series of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol. It is a "network of networks" that consists of millions of smaller networks, which together carry various information and services, such as electronic mail, online chat, file transfer, and the interlinked web pages and other resources of the World Wide Web (WWW). So the Internet is a collection of interconnected computer networks, linked by copper wires, fiber-optic cables, wireless connections, etc. In contrast, the Web is a collection of interconnected documents and other resources, linked by hyperlinks and URLs. The World Wide Web is one of the services accessible via the Internet, along with various others including e-mail, file sharing, online gaming and others.
Common uses
E-mail
The concept of sending electronic text messages between parties in a way analogous to mailing letters or memos predates the creation of the Internet. Even today it can be important to distinguish between Internet and internal e-mail systems. Internet e-mail may travel and be stored unencrypted on many other networks and machines out of both the sender's and the recipient's control. During this time it is quite possible for the content to be read and even tampered with by third parties, if anyone considers it important enough. Purely internal or intranet mail systems, where the information never leaves the corporate or organization's network, are much more secure, although in any organization there will be IT and other personnel whose job may involve monitoring, and occasionally accessing, the e-mail of other employees not addressed to them.
The World Wide Web
Many people use the terms Internet and World Wide Web (or just the Web) interchangeably, but, as discussed above, the two terms are not synonymous.
The World Wide Web is a huge set of interlinked documents, images and other resources, linked by hyperlinks and URLs. These hyperlinks and URLs allow the web servers and other machines that store originals, and cached copies, of these resources to deliver them as required using HTTP (Hypertext Transfer Protocol). HTTP is only one of the communication protocols used on the Internet.
Web services also use HTTP to allow software systems to communicate in order to share and exchange business logic and data.
Software products that can access the resources of the Web are correctly termed user agents. In normal use, web browsers, such as Internet Explorer and Firefox, access web pages and allow users to navigate from one to another via hyperlinks. Web documents may contain almost any combination of computer data including graphics, sounds, text, video, multimedia and interactive content including games, office applications and scientific demonstrations.
Using the Web, it is also easier than ever before for individuals and organisations to publish ideas and information to an extremely large audience. Anyone can find ways to publish a web page, a blog or build a website for very little initial cost. Publishing and maintaining large, professional websites full of attractive, diverse and up-to-date information is still a difficult and expensive proposition, however.
Many individuals and some companies and groups use "web logs" or blogs, which are largely used as easily updatable online diaries. Some commercial organisations encourage staff to fill them with advice on their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result. One example of this practice is Microsoft, whose product developers publish their personal blogs in order to pique the public's interest in their work.

Remote access
The Internet allows computer users to connect to other computers and information stores easily, wherever they may be across the world. They may do this with or without the use of security, authentication and encryption technologies, depending on the requirements.
This is encouraging new ways of working from home, collaboration and information sharing in many industries. An accountant sitting at home can audit the books of a company based in another country, on a server situated in a third country that is remotely maintained by IT specialists in a fourth. These accounts could have been created by home-working bookkeepers, in other remote locations, based on information e-mailed to them from offices all over the world. Some of these things were possible before the widespread use of the Internet, but the cost of private leased lines would have made many of them infeasible in practice.
An office worker away from his desk, perhaps on the other side of the world on a business trip or a holiday, can open a remote desktop session into his normal office PC using a secure Virtual Private Network (VPN) connection via the Internet. This gives the worker complete access to all of his or her normal files and data, including e-mail and other applications, while away from the office.
This concept is also referred to by some network security people as the Virtual Private Nightmare, because it extends the secure perimeter of a corporate network into its employees' homes; this has been the source of some notable security breaches, but also provides security for the workers.
Collaboration
The low cost and nearly instantaneous sharing of ideas, knowledge, and skills has made collaborative work dramatically easier. Not only can a group cheaply communicate and test, but the wide reach of the Internet allows such groups to easily form in the first place, even among niche interests. An example of this is the free software movement in software development, which produced GNU and Linux from scratch and has taken over development of Mozilla and OpenOffice.org (formerly known as Netscape Communicator and StarOffice).
Internet "chat", whether in the form of IRC "chat rooms" or channels, or via instant messaging systems, allow colleagues to stay in touch in a very convenient way when working at their computers during the day. Messages can be sent and viewed even more quickly and conveniently than via e-mail. Extension to these systems may allow files to be exchanged, "whiteboard" drawings to be shared as well as voice and video contact between team members.
Version control systems allow collaborating teams to work on shared sets of documents without either accidentally overwriting each other's work or having members wait until they get "sent" documents to be able to add their thoughts and changes.
File sharing
A computer file can be e-mailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or FTP server for easy download by others. It can be put into a "shared location" or onto a file server for instant use by colleagues. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks.
In any of these cases, access to the file may be controlled by user authentication; the transit of the file over the Internet may be obscured by encryption, and money may change hands before or after access to the file is given. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—hopefully fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by digital signatures or by MD5 or other message digests.
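A minimal Python sketch of the message-digest check mentioned above; the file contents are a stand-in, and in practice the digest is published by the sender alongside the download.

import hashlib

original = b"contents of the distributed file"        # hypothetical file data
published_digest = hashlib.md5(original).hexdigest()  # digest published by the sender

# The receiver recomputes the digest over what actually arrived and compares.
received = b"contents of the distributed file"
ok = hashlib.md5(received).hexdigest() == published_digest
print("file intact" if ok else "file altered in transit")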
These simple features of the Internet, over a worldwide basis, are changing the basis for the production, sale, and distribution of anything that can be reduced to a computer file for transmission. This includes all manner of print publications, software products, news, music, film, video, photography, graphics and the other arts. This in turn has caused seismic shifts in each of the existing industries that previously controlled the production and distribution of these products.
Internet collaboration technology enables business and project teams to share documents, calendars and other information. Such collaboration occurs in a wide variety of areas including scientific research, software development, conference planning, political activism and creative writing.
Streaming media
Many existing radio and television broadcasters provide Internet "feeds" of their live audio and video streams (for example, the BBC). They may also allow time-shift viewing or listening such as Preview, Classic Clips and Listen Again features. These providers have been joined by a range of pure Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected device, such as a computer or something more specific, can be used to access on-line media in much the same way as was previously possible only with a television or radio receiver. The range of material is much wider, from pornography to highly specialized, technical webcasts. Podcasting is a variation on this theme, where—usually audio—material is first downloaded in full and then may be played back on a computer or shifted to a digital audio player to be listened to on the move. These techniques using simple equipment allow anybody, with little censorship or licensing control, to broadcast audio-visual material on a worldwide basis.
Webcams can be seen as an even lower-budget extension of this phenomenon. While some webcams can give full-frame-rate video, the picture is usually either small or updates slowly. Internet users can watch animals around an African waterhole, ships in the Panama Canal, the traffic at a local roundabout or their own premises, live and in real time. Video chat rooms, video conferencing, and remote controllable webcams are also popular. Many uses can be found for personal webcams in and around the home, with and without two-way sound.
YouTube, sometimes described as an Internet phenomenon because of the vast amount of users and how rapidly the site's popularity has grown, was founded on February 15, 2005. It is now the leading website for free streaming video. It uses a flash-based web player which streams video files in the format FLV. Users are able to watch videos without signing up; however, if users do sign up they are able to upload an unlimited amount of videos and they are given their own personal profile. It is currently estimated that there are 64,000,000 videos on YouTube, and it is also currently estimated that 825,000 new videos are uploaded every day.
Voice telephony (VoIP)
VoIP stands for Voice over IP, where IP refers to the Internet Protocol that underlies all Internet communication. This phenomenon began as an optional two-way voice extension to some of the instant messaging systems that took off around the year 2000. In recent years many VoIP systems have become as easy to use and as convenient as a normal telephone. The benefit is that, as the Internet carries the actual voice traffic, VoIP can be free or cost much less than a normal telephone call, especially over long distances and especially for those with always-on Internet connections such as cable or ADSL.
Thus, VoIP is maturing into a viable alternative to traditional telephones. Interoperability between different providers has improved and the ability to call or receive a call from a traditional telephone is available. Simple, inexpensive VoIP modems are now available that eliminate the need for a PC.
Voice quality can still vary from call to call but is often equal to and can even exceed that of traditional calls.
Remaining problems for VoIP include emergency telephone number dialing and reliability. Currently, a few VoIP providers provide an emergency service, but it is not universally available. Traditional phones are line-powered and operate during a power failure; VoIP does not do so without a backup power source for the electronics.
Most VoIP providers offer unlimited national calling, but the direction in VoIP is clearly toward global coverage with unlimited minutes for a low monthly fee.
VoIP has also become increasingly popular within the gaming world, as a form of communication between players. Popular gaming VoIP clients include Ventrilo and Teamspeak, and there are others available also. The PlayStation 3 and Xbox 360 also offer VoIP chat features.
Slide -13
Network Topology
Network topology is the physical layout of the network: the locations of the computers and how the cable is run between them. It is important to use the right topology. Each topology has its own strengths and weaknesses.
Mesh Topology
• It is also called a point-to-point topology.
• Each device is connected directly to all other network devices.
• In a mesh topology, every device in the network is physically connected to every other device in the network. A message can be sent on different possible paths from source to destination. Mesh topology provides improved performance and reliability. Mesh networks are not used much in local area networks. It is mostly used in wide area networks.
Advantages
The use of dedicated links guarantees that each connection can carry its own data load. This eliminates traffic problems.
If one link becomes unusable, it does not harm the entire system.
It is easy to troubleshoot.
Disadvantages
A full mesh network can be very expensive.
It is difficult to install and reconfigure.
Bus Topology
• It is a multipoint topology.
• Each device shares the connection.
• Only one device at a time can send.
• Bus topology is the cheapest way of connecting computers to form a workgroup or departmental LAN, but it has the disadvantage that a single loose connection or cable break can bring down the entire LAN.
Termination is an important issue in bus networks. The electrical signal from a transmitting computer is free to travel the entire length of the cable. Without termination, when the signal reaches the end of the wire, it bounces back and travels back up the wire. When a signal echoes back and forth along an unterminated bus, it is called ringing. Terminators absorb the electrical energy and stop the reflections.
Advantages:
A bus is easy to use and understand, and it is an inexpensive, simple network.
It is easy to extend a network by adding cable with a repeater that boosts the signal and allows it to travel a longer distance.
Disadvantage:
A bus topology becomes slow under heavy network traffic with a lot of computers, because the computers do not coordinate with each other to reserve times to transmit.
It is difficult to troubleshoot a bus because a cable break or loose connector will cause reflections and bring down the whole network.
Ring Topology
• It is a circle with no ends.
• Packets are sent from one device to the next
Advantages :
One computer cannot monopolize the network.
It continues to function after capacity is exceeded, but the speed will be slow.
Disadvantages :
Failure of one computer can affect the whole network.
It is difficult to troubleshoot.
Adding and removing computers disrupts the network.
Star topology
• All devices are connected to a central device.
• The hub receives and forwards packets.
Advantages :
The failure of a single computer or cable doesn't bring down the entire network.
The centralized networking equipment can reduce costs in the long run by making network management much easier.
It allows several cable types in the same network when used with a hub that can accommodate multiple cable types.
Disadvantages :
Failure of the central hub causes the whole network failure.
It is slightly more expensive than using bus topology.
Hybrid Topology
• Variations of two or more topologies.
• Star and Bus
• Star and Ring
Slide-15
Collection of interrelated data
Set of programs to access the data
DBMS contains information about a particular enterprise
DBMS provides an environment that is both convenient and efficient to use
Physical level: describes how a record (e.g. customer) is stored.
Logical level: describes data stored in database, and the relationships among the data.
{ logical-level description of a customer record; a city name is text, so it is a string }
type customer = record
name: string;
street: string;
city: string;
end;
View level: application programs hide details of data types. Views can also hide information (e.g. salary) for security purposes.
Slide-16
A schema is the overall design of the database. It describes the data contents, structure and some other aspects of the database; it is also called the intension of the database.
The instance is the collection of data stored in the database at a particular time, also called the extension of the database.
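The distinction can be seen with a small sketch using Python's built-in sqlite3 module (the table and values are invented for illustration): the CREATE TABLE statement defines the schema, while the rows present at any moment form the instance.

import sqlite3

db = sqlite3.connect(":memory:")

# Schema (intension): the overall design of the database.
db.execute("CREATE TABLE customer (name TEXT, street TEXT, city TEXT)")

# Instance (extension): the data actually stored at this particular time.
db.execute("INSERT INTO customer VALUES ('Johnson', '12 Alma Street', 'Palo Alto')")
print(db.execute("SELECT * FROM customer").fetchall())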
Slide-17
A transaction is a collection of operations that performs a single logical function in a database application.
Transaction-management component ensures that the database remains in a consistent (correct) state despite system failures (e.g. power failures and operating system crashes) and transaction failures.
Concurrency-control manager controls the interaction among the concurrent transactions, to ensure the consistency of the database.
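A small sketch of transaction management with Python's sqlite3 module (the accounts and the simulated crash are hypothetical): a funds transfer is one logical function made of two updates, and a failure in the middle is rolled back so the database stays consistent.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE account (no TEXT PRIMARY KEY, balance INTEGER)")
db.executemany("INSERT INTO account VALUES (?, ?)", [("A-101", 500), ("A-215", 700)])
db.commit()

def transfer(amount, src, dst, fail_midway=False):
    # One logical function made of two updates: debit one account, credit the other.
    try:
        db.execute("UPDATE account SET balance = balance - ? WHERE no = ?", (amount, src))
        if fail_midway:
            raise RuntimeError("simulated system failure")
        db.execute("UPDATE account SET balance = balance + ? WHERE no = ?", (amount, dst))
        db.commit()                      # both updates become durable together
    except RuntimeError:
        db.rollback()                    # undo the half-finished work

transfer(100, "A-101", "A-215", fail_midway=True)
print(db.execute("SELECT * FROM account ORDER BY no").fetchall())   # balances unchanged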
A storage manager is a program module that provides the interface between the low-level data stored in the database and the application programs and queries submitted to the system.
The storage manager is responsible for the following tasks:
Interaction with the file manager
Efficient storing, retrieving, and updating of data
Slide- 18
Users are differentiated by the way they expect to interact with the system
Application programmers – interact with system through DML calls
Sophisticated users – form requests in a database query language
Specialized users – write specialized database applications that do not fit into the traditional data processing framework
Naïve users – invoke one of the permanent application programs that have been written previously
– Examples: people accessing databases over the web, bank tellers, clerical staff
Slide-20
File System Data Management
Requires extensive programming in third-generation language (3GL)
Time consuming
Makes ad hoc queries impossible
Leads to islands of information
Data Dependence
Change in file’s data characteristics requires modification of data access programs
Must tell program what to do and how
Makes file systems cumbersome from programming and data management views
Structural Dependence
Change in file structure requires modification of related programs
Field Definitions and Naming Conventions
Flexible record definition anticipates reporting requirements
Selection of proper field names important
Attention to length of field names
Use of unique record identifiers
Data Redundancy
Different and conflicting versions of same data
Results of uncontrolled data redundancy
Data anomalies
Modification
Insertion
Deletion
Data inconsistency
Lack of data integrity
A good DBMS performs the following functions
maintain data dictionary
support multiple views of data
enforce integrity constraints
enforce access constraints
support concurrency control
support backup and recovery procedures
support logical transactions
Purpose of Database System
Built on top of file systems
Drawbacks of using file systems:
Atomicity of updates
Concurrent access by multiple users
Security problems
Database systems offer solutions to all the above problems
Slide -21
E-R model of real world
Entities (objects)
E.g. customers, accounts, bank branch
Relationships between entities
E.g. Account A-101 is held by customer Johnson
Relationship set depositor associates customers with accounts
Widely used for database design
Database design in E-R model usually converted to design in the relational model (coming up next) which is used for storage and processing
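A minimal sketch of that conversion using Python's sqlite3 module (values are illustrative): the customer and account entity sets and the depositor relationship set each become a table, and the slide's example "account A-101 is held by customer Johnson" becomes a row in depositor.

import sqlite3

db = sqlite3.connect(":memory:")
# Entity sets become tables...
db.execute("CREATE TABLE customer (customer_name TEXT PRIMARY KEY)")
db.execute("CREATE TABLE account  (account_no TEXT PRIMARY KEY, balance INTEGER)")
# ...and so does the relationship set that associates customers with accounts.
db.execute("CREATE TABLE depositor (customer_name TEXT, account_no TEXT)")

db.execute("INSERT INTO customer VALUES ('Johnson')")
db.execute("INSERT INTO account VALUES ('A-101', 500)")
db.execute("INSERT INTO depositor VALUES ('Johnson', 'A-101')")   # Johnson holds A-101

print(db.execute("""
    SELECT c.customer_name, a.account_no
    FROM customer c
    JOIN depositor d ON d.customer_name = c.customer_name
    JOIN account   a ON a.account_no    = d.account_no
""").fetchall())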

CSE 101 LECTURE 2

FOR POWER POINT AND RELEVANT SLIDE MAIL ME AT tanvirfalcon@gmail.com
SLIDE-4
KeyBoard:
In computing, a keyboard is an input device partially modeled after the typewriter keyboard which uses an arrangement of buttons, or keys which act as electronic switches. A keyboard typically has characters engraved or printed on the keys, and each press of a key typically corresponds to a single written symbol. However, to produce some symbols requires pressing and holding several keys simultaneously or in sequence. While most keyboard keys produce letters, numbers or signs (characters), other keys or simultaneous key presses can produce actions or computer commands.
SLIDE- 6
Mouse
An input device that allows an individual to control the mouse pointer in a graphical user interface (GUI). Using a mouse, a user can perform various functions such as opening a program or file without having to memorize commands, like those used in a text-based environment such as MS-DOS.
When and who invented the first computer mouse?
The computer mouse as we know it today was invented and developed by Douglas Engelbart during the 1960s and was patented on November 17, 1970. While creating the mouse, Douglas was working at the Stanford Research Institute, a think tank sponsored by Stanford University, and originally referred to the mouse as an "X-Y Position Indicator for a Display System." This mouse was first used with the Xerox Alto computer system in 1973. However, because of the Alto's lack of success, the first widely used mouse is credited as being the mouse found on the Apple Lisa computer. Today, the mouse is found and used on every computer.
The picture above, taken by Marcin Wichary at the New Mexico Museum of Natural History and Science, is an example of what the first computer mouse looked like. As can be seen in the picture, the mouse was much larger than today's mouse, square, and had a small button in the top right corner.

World First Trackball Mouse
The world's first trackball was invented by Tom Cranston, Fred Longstaff and Kenyon Taylor, working on the Royal Canadian Navy's DATAR project in 1952. It used a standard Canadian five-pin bowling ball. It was not patented, as it was a secret military project.
Using the mouse involves five techniques.
1. Pointing: Move the mouse to move the on-screen pointer.
2. Clicking: Press and release the left mouse button once.
3. Double-clicking: Press and release the left mouse button twice.
4. Dragging: Hold down the left mouse button as you move the pointer.
5. Right-clicking: Press and release the right mouse button.


Trackballs
• A trackball is like a mouse turned upside-down.
• Use your thumb to move the exposed ball and your fingers to press the buttons.

Trackpads
• A track pad is a touch-sensitive pad that provides the same functionality as a mouse.
• To use a track pad, you glide your finger across its surface.
• Track pads provide a set of buttons that function like mouse buttons.

Integrated Pointing Devices
• An integrated pointing device is a small joystick built into the keyboard.
• To use an integrated pointing device, you move the joystick.
• These devices provide a set of buttons that function like mouse buttons
SLIDE- 7
Digital Camera
A type of camera that stores the pictures or video it takes in electronic format instead of to film. There are several features that make digital cameras a popular choice when compared to film cameras. First, the feature often enjoyed the most is the LCD display on the digital camera. This display allows users to view photos or video after the picture or video has been taken, which means if you take a picture and don't like the results, you can delete it; or if you do like the picture, you can easily show it to other people. Another nice feature with digital cameras is the ability to take dozens, sometimes hundreds of different pictures. To the right is a picture of the Casio QV-R62, a 6.0 Mega Pixel digital camera used to help illustrate what a digital camera may look like.
Digital cameras have quickly become the camera solution for most users today, as the quality of picture they take has greatly improved and their price has decreased. Many users, however, are hesitant to buy a digital camera because of the perceived inability to get their pictures developed. However, there are several solutions for getting digital pictures developed. For example, there are numerous Internet companies capable of developing your pictures and sending them to you in the mail. In addition, many of the places that develop standard camera film can now develop digital pictures if you bring them your camera, memory stick, and/or pictures on CD.

SLIDE- 9

A touchscreen is a display which can detect the presence and location of a touch within the display area. The term generally refers to touch or contact to the display of the device by a finger or hand. Touchscreens can also sense other passive objects, such as a stylus. However, if the object sensed is active, as with a light pen, the term touchscreen is generally not applicable. The rule of thumb is: if you can interact with the display using your finger, it is likely a touchscreen - even if you are using a stylus or some other object.
Up until recently, most touchscreens could only sense one point of contact at a time, and few have had the capability to sense how hard one is touching. This is starting to change with the emergence of multi-touch technology - a technology that was first seen in the early 1980s, but which is now appearing in commercially available systems.
The touchscreen has two main attributes. First, it enables you to interact with what is displayed directly on the screen, where it is displayed, rather than indirectly with a mouse (computing) or touchpad. Secondly, it lets one do so without requiring any intermediate device, again, such as a stylus that needs to be held in the hand. Such displays can be attached to computers or, as terminals, to networks. They also play a prominent role in the design of digital appliances such as the personal digital assistant (PDA), satellite navigation devices and mobile phone
SLIDE- 10
Monitor
Also called a video display terminal (VDT), a monitor is a video display screen and the hard shell that holds it. In its most common usage, monitor refers only to devices that contain no electronic equipment other than what is essentially needed to display and adjust the characteristics of an image.
SLIDE- 11
CRT Monitors
Short for cathode-ray tube, CRT monitors were the only choice consumers had for monitor technology for many years. Cathode ray tube (CRT) technology has been in use for more than 100 years, and is found in most televisions and computer monitors. A CRT

CSE 101 LECTURE 1

FOR POWER POINT AND RELEVANT SLIDE MAIL ME AT tanvirfalcon@gmail.com

CSE 101 LECTURE 1 SLIDE 9
User Interface
The user interface is what you see when you turn on the computer. It consists of the cursors, prompts, icons, menus, etc.

The operating system provides these facilities for the user:
Program creation: editors, debuggers, other development tools.
Program execution: loading, files, I/O operations.
Access to I/O devices: reads and writes.
Controlled access to files: protection mechanisms, abstraction of the underlying device.
System access: controls who can access the system.
Error detection and response: external, internal, software or hardware errors.
Accounting: collect statistics, load sharing, for billing purposes.
Resource Management
Processors: allocation of processes to processors, preemption, scheduling.
Memory: allocation of main memory.
I/O devices: when to access I/O devices, which ones, etc.
Files: partitions, space allocation and maintenance.
Applications, data, objects.
Task Management
• The OS uses interrupt requests (IRQs) to maintain organized communication with the CPU and other pieces of hardware.
• Each hardware device is controlled by a piece of software, called a driver, which allows the OS to activate and use the device.
• The operating system provides the software necessary to link computers and form a network.
File Management
• The operating system keeps track of all the files on each disk.
• Users can make file management easier by creating a hierarchical file system that includes folders and subfolders arranged in a logical order.
Security
• When resources are shared, the systems and user resources must be protected from intentional as well as inadvertent misuse.
• Protection generally deals with access control. Example: a read-only file.
• Security usually deals with threats from outside the system that affect the integrity and availability of the system and the information within the system.
• Examples: a username and password to access the system; data encryption to protect information.
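As a small illustration of the username/password idea (a minimal sketch, not a production scheme; the password, salt handling and single stored user are simplifying assumptions), the system stores only a salted hash of the password and compares hashes at login:

import hashlib
import os

salt = os.urandom(16)                                      # random salt for this user
stored_hash = hashlib.sha256(salt + b"secret-password").hexdigest()

def login(attempt):
    # Recompute the hash of the attempt; grant access only if it matches the stored one.
    return hashlib.sha256(salt + attempt.encode()).hexdigest() == stored_hash

print(login("secret-password"))   # True  -> access granted
print(login("wrong-guess"))       # False -> access denied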
Utilities
• A utility is a program that performs a task that is not typically handled by the operating system.
• Some utilities enhance the operating system's functionality.
• Some of the major categories of utilities include:
• File defragmentation
• Data compression
• Backup
• Antivirus
• Screen savers

Slide 10
Command Driven Interface
With a command-driven interface, you type in an instruction, which is usually abbreviated, in order to get something done. Command-driven user interfaces are not easy to use: if you are new to the software, you have to remember many commands in order to be able to use the software quickly.
• Some older operating systems, such as DOS and UNIX, use command-line interfaces.
• In a command-line interface, you type commands at a prompt.
• Under command-line interfaces, individual applications do not need to look or function the same way, so different programs can look very different.
Menu Driven Interface
This type of user interface produces a list of commands or options available within a program, and the user can make a selection by using either a mouse or a keyboard. Both MS Windows and Macintosh programs are menu driven.
Graphical User Interface:
• Most modern operating systems, like Windows and the Macintosh OS, provide a graphical user interface (GUI).
• A GUI lets you control the system by using a mouse to click graphical objects on screen.
• A GUI is based on the desktop metaphor. Graphical objects appear on a background (the desktop), representing resources you can use.
SLIDE 11
GUI Tools
• Icons are pictures that represent computer resources, such as printers, documents, and programs.
• You double-click an icon to choose (activate) it, for instance, to launch a program.
• The Windows operating system offers two unique tools, called the taskbar and Start button. These help you run and manage programs.
Applications and the Interface
• Applications designed to run under one operating system use similar interface elements.
• Under an OS such as Windows, you see a familiar interface no matter what programs you use.
• In a GUI, each program opens and runs in a separate window—a frame that presents the program and its documents.
• In a GUI, you can run multiple programs at once, each in a separate window. The application in use is said to be the active window.
Menus
• GUI-based programs let you issue commands by choosing them from menus.
• A menu groups related commands. For example, the File menu's commands let you open, save, and print document files.
• Menus let you avoid memorizing and typing command names.
• In programs designed for the same GUI, menus and commands are similar from one program to another.
Dialog Box
• A dialog box is a special window that appears when a program or the OS needs more information before completing a task.
• Dialog boxes are so named because they conduct a "dialog" with the user, asking the user to provide more information or make choices.
WIMP
WIMP stands for windows, icons, menus and pointing devices. The term describes the features of a graphical user interface which make it easier for the user to get things done.

SLIDE-13
Basic Services
• The operating system manages all the other programs that run on the PC.
• The operating system provides services to programs and the user, including file management, memory management, and printing
• To provide services to programs, the OS makes system calls—requesting other hardware and software resources to perform tasks.
Sharing Information
• Some operating systems, such as Windows, enable programs to share information.
• You can create data in one program and use it again in other programs without re-creating it.
• Windows provides the Clipboard, a special area that stores data cut or copied from one document, so you can re-use it elsewhere.
Multi-tasking
• Multitasking is the capability of running multiple processes simultaneously.
• A multitasking OS lets you run multiple programs at the same time.
• Through multitasking, you can do several chores at one time, such as printing a document while downloading a file from the Internet.
• There are two types of multitasking: cooperative and preemptive.
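A tiny Python sketch of the cooperative flavour (the task names are invented): each task voluntarily yields control at every await, so the two jobs make progress in turns; in preemptive multitasking the operating system would interrupt them on a timer instead.

import asyncio

async def task(name, steps):
    for i in range(steps):
        print(name, "step", i)
        await asyncio.sleep(0)      # cooperatively yield so the other task can run

async def main():
    # "print a document" while "downloading a file" -- both appear to run at once
    await asyncio.gather(task("print document", 3), task("download file", 3))

asyncio.run(main())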
Slide-14
System Software :
System Software refers to the operating system and all utility programs that manage computer resources at a low level. Systems software includes compilers, loaders, linkers, and debuggers.
System software can be divided into 3 major categories:
1. System Management Program
2. System Support Program
3. System Development Program
System Management Program
1. Operating System
2. Network Management
3. Device Driver
System Support Program
1. System Utility Program
2. System Performance monitor program
3. System Security Monitor Program
System Development Program
1. Programming Language Translator
2. Programming editor and tools
3. Computer Aided Software Engineering (CASE)
Application Software
Applications software comprises programs designed for an end user, such as word processors, database systems, and spreadsheet programs.
Application software can be divided into two major categories:
General Application Program
Specific Application Program
General Application Program
Some of the General Application Programs are:
Software Suite – MS Office, Lotus SmartSuite, Corel WordPerfect Office
Web Browser – Internet Explorer, Netscape, Opera.
Electronic Mail - e-mail, Eudora, etc
Desktop Publishing – PageMaker, Publisher
Database Management System – Oracle, Access, dBase
Specific Application Program
Some of the Specific Application Programs are: Accounting Software, Sales Management, E-commerce, Inventory Control, Payroll System, Ticket Reservation, etc.
SLIDE-24
Concise industry history of Supercomputer
Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), and led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for 5 years (1985–1990). Cray, himself, never used the word "supercomputer," a little-remembered fact in that he only recognized the word "computer." In the 1980s a large number of smaller competitors entered the market, in a parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as IBM and HP, who had purchased many of the 1980s companies to gain their experience, although Cray Inc. still specializes in building supercomputers.
The Cray-2 was the world's fastest computer from 1985 to 1989.
The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's normal computer. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were dedicated to running a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. The early and mid-1980s saw machines with a modest number of vector processors working in parallel become the standard. Typical numbers of processors were in the range 4–16. In the later 1980s and 1990s, attention turned from vector processors to massive parallel processing systems with thousands of "ordinary" CPUs, some being off the shelf units and others being custom designs. (This is commonly and humorously referred to as the attack of the killer micros in the industry.) Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Itanium, or x86-64, and most modern supercomputers are now highly-tuned computer clusters using commodity processors combined with custom interconnects.
Software tools
Software tools for distributed processing include standard APIs such as MPI and PVM, and open source-based software solutions such as Beowulf and openMosix which facilitate the creation of a supercomputer from a collection of ordinary workstations or servers. Technology like ZeroConf (Rendezvous/Bonjour) can be used to create ad hoc computer clusters for specialized software such as Apple's Shake compositing application. An easy programming language for supercomputers remains an open research topic in computer science.
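As a small illustration of the MPI style of distributed programming mentioned above, the sketch below uses the mpi4py Python binding (an assumption of this sketch; it is not named in the text and must be installed separately). Each process sums part of a range and rank 0 collects the total; it would typically be launched with something like "mpiexec -n 4 python sum_mpi.py".

# Minimal MPI-style sketch using mpi4py (assumed installed).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id
size = comm.Get_size()   # total number of processes

N = 1_000_000
# Split the range 0..N-1 across processes by striding.
partial = sum(range(rank, N, size))

# Combine all partial sums onto rank 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"sum of 0..{N-1} computed by {size} processes: {total}")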
Common uses
Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum mechanical physics, weather forecasting, climate research (including research into global warming), molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion), cryptanalysis, and the like. Major universities, military agencies and scientific research laboratories are heavy users.
A particular class of problems, known as Grand Challenge problems, are problems whose full solution requires semi-infinite computing resources.
Relevant here is the distinction between capability computing and capacity computing, as defined by Graham et al. Capability computing is typically thought of as using the maximum computing power to solve a large problem in the shortest amount of time. Oftentimes a capability system is able to solve a problem of a size or complexity that no other computer can. Capacity computing in contrast is typically thought of as using efficient cost-effective computing power to solve somewhat large problems or many small problems or to prepare for a run on a capability system.
Hardware and software design
Processor board of a CRAY YMP vector computer
Supercomputers using custom CPUs traditionally gained their speed over conventional computers through the use of innovative designs that allow them to perform many tasks in parallel, as well as complex detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times—in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.
As with all highly parallel systems, Amdahl's law applies, and supercomputer designs devote great effort to eliminating software serialization, and using hardware to accelerate the remaining bottlenecks.
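Amdahl's law can be written as speedup = 1 / ((1 - p) + p/n), where p is the fraction of the work that can run in parallel and n is the number of processors. The short sketch below (an illustration, not from the slides) shows how quickly the serial fraction limits the achievable speedup.

# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n)
# p = parallelizable fraction of the program, n = processor count.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    for p in (0.50, 0.90, 0.99):
        for n in (4, 64, 1024):
            print(f"p={p:.2f}, n={n:5d} -> speedup ~ {amdahl_speedup(p, n):7.1f}")
    # Even with 1024 processors, a 1% serial fraction caps speedup near 1/0.01 = 100.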
Supercomputer challenges, technologies
A supercomputer generates large amounts of heat and must be cooled. Cooling most supercomputers is a major HVAC problem.
Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many meters across must have latencies between its components measured at least in the tens of nanoseconds. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason: hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1-5 microseconds to send a message between CPUs are typical.
Supercomputers consume and produce massive amounts of data in a very short period of time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.
Technologies developed for supercomputers include:
Vector processing
Liquid cooling
Non-Uniform Memory Access (NUMA)
Striped disks (the first instance of what was later called RAID)
Parallel filesystems
Processing techniques
Vector processing techniques were first developed for supercomputers and continue to be used in specialist high-performance applications. Vector processing techniques have trickled down to the mass market in DSP architectures and SIMD processing instructions for general-purpose computers.
Modern video game consoles in particular use SIMD extensively and this is the basis for some manufacturers' claim that their game machines are themselves supercomputers. Indeed, some graphics cards have the computing power of several TeraFLOPS. The applications to which this power can be applied was limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, Graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU.)
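The trickle-down of vector processing into everyday software can be illustrated with NumPy (an assumption of this sketch, not something named in the text): one whole-array operation replaces an element-by-element Python loop, much as a single vector instruction replaces many scalar instructions.

import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Scalar-style loop: one multiply-add per iteration.
start = time.perf_counter()
c_loop = [a[i] * b[i] + 1.0 for i in range(n)]
loop_time = time.perf_counter() - start

# Vector-style operation: the whole arrays are processed at once,
# letting the underlying library use SIMD instructions where available.
start = time.perf_counter()
c_vec = a * b + 1.0
vec_time = time.perf_counter() - start

print(f"loop:   {loop_time:.3f} s")
print(f"vector: {vec_time:.3f} s")
print("results match:", np.allclose(c_loop, c_vec))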
Operating systems
Supercomputers predominantly run some variant of Linux or UNIX; Linux has been the most popular since 2004.
Supercomputer operating systems, today most often variants of Linux or UNIX, are every bit as complex as those for smaller machines, if not more so. Their user interfaces tend to be less developed, however, as the OS developers have limited programming resources to spend on non-essential parts of the OS (i.e., parts not directly contributing to the optimal utilization of the machine's hardware). This stems from the fact that because these computers, often priced at millions of dollars, are sold to a very small market, their R&D budgets are often limited. (The advent of Unix and Linux allows reuse of conventional desktop software and user interfaces.)
Interestingly this has been a continuing trend throughout the supercomputer industry, with former technology leaders such as Silicon Graphics taking a back seat to such companies as NVIDIA, who have been able to produce cheap, feature-rich, high-performance, and innovative products due to the vast number of consumers driving their R&D.
Historically, until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers to this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. Similarly different and incompatible vectorizing and parallelizing compilers for Fortran existed. This trend would have continued with the ETA-10 were it not for the initial instruction set compatibility between the Cray-1 and the Cray X-MP, and the adoption of UNIX operating system variants (such as Cray's Unicos and today's Linux.)
For this reason, in the future, the highest performance systems are likely to have a UNIX flavor but with incompatible system-unique features (especially for the highest-end systems at secure facilities).
Programming
The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Special-purpose Fortran compilers can often generate faster code than C or C++ compilers, so Fortran remains the language of choice for scientific programming, and hence for most programs run on supercomputers. To exploit the parallelism of supercomputers, programming environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are being used.
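OpenMP itself targets C, C++ and Fortran; as a loose Python analogue of a shared-work parallel loop (a sketch of the idea, not part of the lecture), multiprocessing.Pool splits a loop's iterations across worker processes.

# Rough analogue of a data-parallel loop, in the spirit of OpenMP's parallel-for.
from multiprocessing import Pool

def work(x: int) -> int:
    # Stand-in for an expensive per-element computation.
    return x * x

if __name__ == "__main__":
    data = range(1, 10_001)
    with Pool(processes=4) as pool:
        results = pool.map(work, data)   # iterations distributed across 4 workers
    print("sum of squares:", sum(results))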
Modern supercomputer architecture
The Columbia Supercomputer at NASA's Advanced Supercomputing Facility at Ames Research Center
As of November 2006, the top ten supercomputers on the Top500 list (and indeed the bulk of the remainder of the list) have the same top-level architecture. Each of them is a cluster of MIMD multiprocessors, each processor of which is SIMD. The supercomputers vary radically with respect to the number of multiprocessors per cluster, the number of processors per multiprocessor, and the number of simultaneous instructions per SIMD processor. Within this hierarchy we have:
A computer cluster is a collection of computers that are highly interconnected via a high-speed network or switching fabric. Each computer runs under a separate instance of an Operating System (OS).
A multiprocessing computer is a computer, operating under a single OS and using more than one CPU, where the application-level software is indifferent to the number of processors. The processors share tasks using Symmetric multiprocessing(SMP) and Non-Uniform Memory Access(NUMA).
An SIMD processor executes the same instruction on more than one set of data at the same time. The processor could be a general purpose commodity processor or special-purpose vector processor. It could also be high performance processor or a low power processor.
As of November 2006, the fastest machine is Blue Gene/L. This machine is a cluster of 65,536 computers, each with two processors, each of which processes two data streams concurrently. By contrast, Columbia is a cluster of 20 machines, each with 512 processors, each of which processes two data streams concurrently.
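A quick back-of-the-envelope check using only the figures quoted above shows how many concurrent data streams each hierarchy implies.

# Concurrent data streams implied by the quoted figures.
blue_gene_l = 65_536 * 2 * 2   # computers x processors x data streams = 262,144
columbia    = 20 * 512 * 2     # machines x processors x data streams  = 20,480
print("Blue Gene/L:", blue_gene_l)
print("Columbia:   ", columbia)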
As of 2005, Moore's Law and economies of scale are the dominant factors in supercomputer design: a single modern desktop PC is now more powerful than a 15-year-old supercomputer, and the design concepts that allowed past supercomputers to out-perform contemporaneous desktop machines have now been incorporated into commodity PCs. Furthermore, the costs of chip development and production make it uneconomical to design custom chips for a small run and favor mass-produced chips that have enough demand to recoup the cost of production. A current-model quad-core Xeon workstation running at 2.66 GHz will outperform a multimillion-dollar Cray C90 supercomputer used in the early 1990s; many workloads that required such a supercomputer in the 1990s can now be done on workstations costing less than 4,000 US dollars.
Additionally, many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously) and, particularly, fairly coarse-grained parallelization that limits the amount of information that needs to be transferred between independent processing units. For this reason, traditional supercomputers can be replaced, for many applications, by "clusters" of computers of standard design which can be programmed to act as one large computer.
Special-purpose supercomputers
Special-purpose supercomputers are high-performance computing devices with a hardware architecture dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, allowing higher price/performance ratios by sacrificing generality. They are used for applications such as astrophysics computation and brute-force codebreaking. Historically a new special-purpose supercomputer has occasionally been faster than the world's fastest general-purpose supercomputer, by some measure. For example, GRAPE-6 was faster than the Earth Simulator in 2002 for a particular special set of problems.
Examples of special-purpose supercomputers:
Deep Blue, for playing chess
Reconfigurable computing machines or parts of machines
GRAPE, for astrophysics and molecular dynamics
Deep Crack, for breaking the DES cipher
The fastest supercomputers today
Measuring supercomputer speed
The speed of a supercomputer is generally measured in "FLOPS" (FLoating Point Operations Per Second), commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10^15 FLOPS, pronounced petaflops). This measurement is based on a particular benchmark which does LU decomposition of a large matrix. This mimics a class of real-world problems, but is significantly easier to compute than a majority of actual real-world problems.
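To see what a FLOPS figure means in practice, the sketch below times a dense matrix multiplication with NumPy (an assumption of this sketch; the official benchmark is LU decomposition, as noted above) and divides the roughly 2*n^3 floating-point operations by the elapsed time.

import time
import numpy as np

n = 1000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                      # dense n x n matrix multiply
elapsed = time.perf_counter() - start

flops = 2 * n ** 3             # ~2*n^3 floating-point operations for the multiply
print(f"{flops / elapsed / 1e9:.2f} GFLOPS in {elapsed:.3f} s")
# A 1 PFLOPS (10^15) machine is about a million times faster than a 1 GFLOPS (10^9) result.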
Current fastest supercomputer system
Roadrunner is a supercomputer built by IBM at the Los Alamos National Laboratory in New Mexico, USA. It is currently the world's fastest computer; price: US$133 million (TK 919,99,99,977).
Courtesy: Wikipedia.com
SLIDE-27
Mainframes (often colloquially referred to as Big Iron) are computers used mainly by large organizations for critical applications, typically bulk data processing such as census, industry and consumer statistics, ERP, and financial transaction processing.
The term probably originated from the early mainframes, as they were housed in enormous, room-sized metal boxes or frames. [1] Later the term was used to distinguish high-end commercial machines from less powerful units which were often contained in smaller packages.
Today in practice, the term usually refers to computers compatible with the IBM System/360 line, first introduced in 1965. (IBM System z9 is IBM's latest incarnation.) Otherwise, systems with similar functionality but not based on the IBM System/360 are referred to as "servers." However, "server" and "mainframe" are not synonymous (see client-server).
Some non-System/360-compatible systems derived from or compatible with older (pre-Web) server technology may also be considered mainframes. These include the Burroughs large systems and the UNIVAC 1100/2200 series systems. Most large-scale computer system architectures were firmly established in the 1960s and most large computers were based on architecture established during that era up until the advent of Web servers in the 1990s. (Interestingly, the first Web server running anywhere outside Switzerland ran on an IBM mainframe at Stanford University as early as 1990. See History of the World Wide Web for details.)
There were several minicomputer operating systems and architectures that arose in the 1970s and 1980s, but minicomputers are generally not considered mainframes. (UNIX arose as a minicomputer operating system; Unix has scaled up over the years to acquire some mainframe characteristics.)
Many defining characteristics of "mainframe" were established in the 1960s, but those characteristics continue to expand and evolve to the present day.

Description
Modern mainframe computers have abilities not so much defined by their single task computational speed (flops or clock rate) as by their redundant internal engineering and resulting high reliability and security, extensive input-output facilities, strict backward compatibility for older software, and high utilization rates to support massive throughput. These machines often run for years without interruption, with repairs and even software and hardware upgrades taking place during normal operation. For example, ENIAC remained in continuous operation from 1947 to 1955. More recently, there are several IBM mainframe installations that have delivered over a decade of continuous business service as of 2007, with upgrades not interrupting service. Mainframes are defined by high availability, one of the main reasons for their longevity, as they are used in applications where downtime would be costly or catastrophic. The term Reliability, Availability and Serviceability (RAS) is a defining characteristic of mainframe computers.
In the 1960s, most mainframes had no interactive interface. They accepted sets of punch cards, paper tape, and/or magnetic tape and operated solely in batch mode to support back office functions, such as customer billing. Teletype devices were also common, at least for system operators. By the early 1970s, many mainframes acquired interactive user interfaces and operated as timesharing computers, supporting hundreds or thousands of users simultaneously along with batch processing. Users gained access through specialized terminals or, later, from personal computers equipped with terminal emulation software. Many mainframes supported graphical terminals (and terminal emulation) by the 1980s (if not earlier). Nowadays most mainframes have partially or entirely phased out classic user terminal access in favor of Web user interfaces.
Historically mainframes acquired their name in part because of their substantial size and requirements for specialized HVAC and electrical power. Those requirements ended by the mid-1990s, with CMOS mainframe designs replacing the older bipolar technology. In fact, in a major reversal, IBM touts the mainframe's ability to reduce data center energy costs for power and cooling and reduced physical space requirements compared to server farms.
Characteristics of mainframes
Nearly all mainframes have the ability to run (or "host") multiple operating systems and thereby operate not as a single computer but as a number of virtual machines. In this role, a single mainframe can replace dozens or even hundreds of smaller servers, reducing management and administrative costs while providing greatly improved scalability and reliability.
Mainframes can add system capacity nondisruptively and granularly. Modern mainframes, notably the IBM zSeries and System z9 servers, offer three levels of virtualization: logical partitions (LPARs, via the PR/SM facility), virtual machines (via the z/VM operating system), and through its operating systems (notably z/OS with its key-protected address spaces and sophisticated goal-oriented workload scheduling, but also Linux and Java). This virtualization is so thorough, so well established, and so reliable that most IBM mainframe customers run no more than two machines: one in their primary data center, and one in their backup data center (fully active, partially active, or on standby, in case there is a catastrophe affecting the first building). All test, development, training, and production workload for all applications and all databases can run on a single machine, except for extremely large demands where the capacity of one machine might be limiting. Such a two mainframe installation can support continuous business service, avoiding both planned and unplanned outages.
Mainframes are designed to handle very high volume input and output (I/O) and emphasize throughput computing. Since the mid-1960s, mainframe designs have included several subsidiary computers (called channels or peripheral processors) which manage the I/O devices, leaving the CPU free to deal only with high-speed memory. It is common in mainframe shops to deal with massive databases and files. Giga-record or tera-record files are not unusual. Compared to a typical PC, mainframes commonly have hundreds to thousands of times as much data storage online, and can access it much faster.
Mainframe return on investment (ROI), like any other computing platform, is dependent on its ability to scale, support mixed workloads, reduce labor costs, deliver uninterrupted service for critical business applications, and several other risk-adjusted cost factors. Some argue that the modern mainframe is not cost-effective. Hewlett-Packard and Dell unsurprisingly take that view at least at times, and so do a few independent analysts. Sun Microsystems used to take that view but, beginning in mid-2007, started promoting its new partnership with IBM, including probable support for the company's OpenSolaris operating system running on IBM mainframes. The general consensus (held by Gartner and other independent analysts) is that the modern mainframe often has unique value and superior cost-effectiveness, especially for large scale enterprise computing. In fact, Hewlett-Packard also continues to manufacture its own mainframe (arguably), the NonStop system originally created by Tandem. Logical partitioning is now found in many high-end UNIX-based servers, and many vendors are promoting virtualization technologies, in many ways validating the mainframe's design accomplishments.
Mainframes also have unique execution integrity characteristics for fault tolerant computing. System z9 servers execute each instruction twice, compare results, and shift workloads "in flight" to functioning processors, including spares, without any impact to applications or users. This feature, also found in HP's NonStop systems, is known as lock-stepping, because both processors take their "steps" (i.e. instructions) together. Not all applications absolutely need the assured integrity that these systems provide, but many do, such as financial transaction processing.
Despite these differences, the IBM mainframe, in particular, is still a general purpose business computer in terms of its support for a wide variety of popular operating systems, middleware, and applications.
Market
As of early 2006, IBM mainframes dominate the mainframe market at well over 90% market share; however, IBM is not the only vendor. Unisys manufactures ClearPath mainframes, based on earlier Sperry and Burroughs product lines. Fujitsu's Nova systems are rebranded Unisys ES7000s. Hitachi co-developed the zSeries 800 with IBM to share expenses. Hewlett-Packard sells its unique NonStop systems, which it acquired with Tandem Computers, and Groupe Bull's DPS mainframes are available in Europe. Unisys and HP increasingly rely on commodity Intel CPUs rather than custom processors in order to reduce development expenses, while IBM has its own large research and development organization to introduce new, homegrown mainframe technologies.
History
Several manufacturers produced mainframe computers from the late 1950s through the 1970s. At this time they were known as "IBM and the Seven Dwarfs": Burroughs, Control Data, General Electric, Honeywell, NCR, RCA, and UNIVAC. IBM's dominance grew out of their 700/7000 series and, later, the development of the 360 series mainframes. The latter architecture has continued to evolve into their current zSeries/z9 mainframes which, along with the then Burroughs and now Unisys MCP-based mainframes, are among the few mainframe architectures still extant that can trace their roots to this early period. That said, while they can still run 24-bit System/360 code, the 64-bit zSeries and System z9 CMOS servers have nothing physically in common with the older systems. The larger of the latter IBM competitors were also often referred to as "The BUNCH" from their initials (Burroughs, UNIVAC, NCR, CDC, Honeywell). Notable manufacturers outside the USA were Siemens and Telefunken in Germany, ICL in the United Kingdom, and Fujitsu, Hitachi, Oki, and NEC in Japan. The Soviet Union and Warsaw Pact countries manufactured close copies of IBM mainframes during the Cold War; the Strela is an example of an independently designed Soviet computer.
Shrinking demand and tough competition caused a shakeout in the market in the early 1980s — RCA sold out to UNIVAC and GE also left; Honeywell was bought out by Bull; UNIVAC became a division of Sperry, which later merged with Burroughs to form Unisys Corporation in 1986. In 1991, AT&T briefly owned NCR. During the same period, companies found that servers based on microcomputer designs could be deployed at a fraction of the acquisition price and offer local users much greater control over their own systems given the IT policies and practices at that time. Terminals used for interacting with mainframe systems were gradually replaced by personal computers. Consequently, demand plummeted and new mainframe installations were restricted mainly to financial services and government. In the early 1990s, there was a consensus among industry analysts that the mainframe was a dying market as mainframe platforms were increasingly replaced by personal computer networks.
That trend started to turn around in the late 1990s as corporations found new uses for their existing mainframes and as the price of data networking collapsed in most parts of the world. The growth of e-business also dramatically increased the number of back-end transactions processed by mainframe software as well as the size and throughput of databases. Another factor currently increasing mainframe use is the development of the Linux operating system, which can run on many mainframe systems, typically in virtual machines. Linux allows users to take advantage of open source software combined with mainframe hardware RAS. Rapid expansion and development in emerging markets, particularly China, is also spurring major mainframe investments to solve exceptionally difficult computing problems, e.g. providing unified, extremely high volume online transaction processing databases for 1 billion consumers across multiple industries (banking, insurance, credit reporting, government services, etc.)
Mainframes vs. supercomputers
The distinction between supercomputers and mainframes is not a hard and fast one, but supercomputers generally focus on problems which are limited by calculation speed while mainframes focus on problems which are limited by input/output and reliability ("throughput computing") and on solving multiple business problems concurrently (mixed workload). The differences and similarities include:
Both types of systems offer parallel processing. Supercomputers typically expose it to the programmer in complex manners, while mainframes typically use it to run multiple tasks. One result of this difference is that adding processors to a mainframe often speeds up the entire workload transparently.
Supercomputers are optimized for complicated computations that take place largely in memory, while mainframes are optimized for comparatively simple computations involving huge amounts of external data. For example, weather forecasting is suited to supercomputers, and insurance business or payroll processing applications are more suited to mainframes.
Supercomputers are often purpose-built for one or a very few specific institutional tasks (e.g. simulation and modeling). Mainframes typically handle a wider variety of tasks (e.g. data processing, warehousing). Consequently, most supercomputers can be one-off designs, whereas mainframes typically form part of a manufacturer's standard model lineup.
Mainframes tend to have numerous ancillary service processors assisting their main central processors (for cryptographic support, I/O handling, monitoring, memory handling, etc.) so that the actual "processor count" is much higher than would otherwise be obvious. Supercomputer design tends not to include as many service processors since they don't appreciably add to raw number-crunching power.
There has been some blurring of the term "mainframe," with some PC and server vendors referring to their systems as "mainframes" or "mainframe-like." This is not widely accepted and the market generally recognizes that mainframes are genuinely and demonstrably different.
Statistics
An IBM zSeries 800 (foreground, left) running Linux.
Historically 85% of all mainframe programs were written in the COBOL programming language. The remainder included a mix of PL/I (about 5%), Assembly language (about 7%), and miscellaneous other languages. eWeek estimates that millions of lines of net new COBOL code are still added each year, and there are nearly 1 million COBOL programmers worldwide, with growing numbers in emerging markets. Even so, COBOL is decreasing as a percentage of the total mainframe lines of code in production because Java, C, and C++ are all growing faster. Nevertheless, COBOL remains the most widely used language for development in the mainframe environment, as it is well suited to the business logic programming for which mainframes are primarily deployed.
Mainframe COBOL has recently acquired numerous Web-oriented features, such as XML parsing, with PL/I following close behind in adopting modern language features.
90% of IBM's mainframes have CICS transaction processing software installed.[2] Other software staples include the IMS and DB2 databases, and WebSphere MQ and WebSphere Application Server middleware.
As of 2004, IBM claimed over 200 new (21st century) mainframe customers — customers that had never previously owned a mainframe. Many are running Linux, some exclusively. There are new z/OS customers as well, frequently in emerging markets and among companies looking to improve service quality and reliability.
In May, 2006, IBM claimed that over 1,700 mainframe customers are running Linux. Nomura Securities of Japan spoke at LinuxWorld in 2006 and is one of the largest publicly known, with over 200 IFLs in operation that replaced rooms full of distributed servers.
Most mainframes run continuously at over 70% busy. A 90% figure is typical, and modern mainframes tolerate sustained periods of 100% CPU utilization, queuing work according to business priorities without disrupting ongoing execution.
Mainframes have a historical reputation for being "expensive," but the modern reality is much different. As of late 2006, it is possible to buy and configure a complete IBM mainframe system (with software, storage, and support), under standard commercial use terms, for about $50,000 (U.S.), equivalent to approximately 50% of the full annual cost of only one IT employee. The price of z/OS starts at about $1,500 (U.S.) per year, including 24x7 telephone and Web support.[3]
Speed and performance
The CPU speed of mainframes has historically been measured in millions of instructions per second (MIPS). MIPS have been used as an easy comparative rating of the speed and capacity of mainframes. The smallest System z9 IBM mainframes today run at about 26 MIPS and the largest about 17,801 MIPS. IBM's Parallel Sysplex technology can join up to 32 of these systems, making them behave like a single, logical computing facility of as much as about 569,632 MIPS.[4]
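The aggregate Parallel Sysplex figure quoted above is just the per-machine maximum multiplied out, as this one-line check (using only the numbers in the text) shows.

# 32 machines x ~17,801 MIPS each ~= the quoted 569,632 MIPS aggregate.
print(32 * 17_801)   # 569632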
The MIPS measurement has long been known to be misleading and has often been parodied as "Meaningless Indicator of Processor Speed." The complex CPU architectures of modern mainframes have reduced the relevance of MIPS ratings to the actual number of instructions executed. Likewise, the modern "balanced performance" system designs focus both on CPU power and on I/O capacity, and virtualization capabilities make comparative measurements even more difficult. See benchmark (computing) for a brief discussion of the difficulties in benchmarking such systems. IBM has long published a set of LSPR (Large System Performance Reference) ratio tables for mainframes that take into account different types of workloads and are a more representative measurement. However, these comparisons are not available for non-IBM systems. It takes a fair amount of work (and maybe guesswork) for users to determine what type of workload they have and then apply only the LSPR values most relevant to them.
To give some idea of real world experience, it is typical for a single mainframe CPU to execute the equivalent of 50, 100, or even more distributed processors' worth of business activity, depending on the workloads. Merely counting processors to compare server platforms is extremely perilous.
Courtesy: Wikipedia.com

Monday, August 8, 2011

FINAL SUGGESTION FOR CSE 101


Objective: From Lecture 1-5 and the whole book 1.a-13.b

Chapter 1
1. List the four parts of a computer system.
2. Identify four types of computer hardware
3. Differentiate the two main categories of computer software
4. Identify two unique features of supercomputers.
5. Describe a typical use for mainframe computers
6. Differentiate workstations from personal computers
7. Name four components found in most graphical user interfaces.
8. Describe the operating system’s role in running software programs.
9. Name five types of utility software.
10. What is OS? List the four primary functions of an OS.
11. What is GUI? What is software? Classify with example.
Chapter 2
1. List the two most commonly used types of computer monitors.
2. Explain how a CRT monitor displays images.
3. List four characteristics you should consider when comparing monitors.
4. List the four criteria you should consider when evaluating printers.
5. Describe how a dot matrix printer creates an image on a page.
6. Explain the process by which a laser printer operates.
7. Explain how data is stored on the surface of magnetic and optical disks.
8. Explain the difference between RAM and ROM.
9. List three hardware factors that affect processing speed

Chapter 3

1. Jacob likes to read. Last month he made a goal to read at least (466)8 pages from his book collection each week. How many pages did he read after (10110)2 weeks? He told his mother it would be equivalent to reading the (8E1)16 page Harry Potter and the Deathly Hallows (011)2 times. Was he correct? Explain your answer.
2. Cimorene spent an afternoon cleaning and organizing the dragon’s treasure. One fourth of the items she sorted were jewelry. (3C %)16 of the remainder were potions, and the rest were magic swords. If there were (110000)2 magic swords, how many pieces of treasure did she sort in all?
3. Brac University randomly picked 3 clubs and asked them to submit their annual profit for the year 2007-2008. The Drama Club, Computer Club and Business Club submitted their annual profits as follows.
a. Drama Club – (E7BA.A)16
b. Computer Club - (163672.636)8
c. Business Club - (1110000110010.1101)2

Then Brac University decided to give all of them TK. 1485.6125, so they intend to write each check in the same format that the club used to submit its annual balance. Write the amount on each club's check in that club's format, and find the total of all 3 clubs' annual profits in decimal. To get full credit, show your steps.
4. From 2003 to 2007, Bhuiyan Group of Industries donated to the WHO (World Health Organization). At the end of 2008 they wanted to know the total amount they had donated so far, and based on that total they decided to donate 50% of it in 2008. How much money, in decimal, will they donate in 2008?

Year Amount
2003 (ABC7.D)16
2004 (1010110110010.1101)2
2005 (73672.636)8
2006 (67252.500)8
2007 (D7E7.D)16
5. Addition and conversion (see the conversion sketch after this chapter's questions).
• (6F.3C)16 + (203.25)8 = (????)16
• (A1D)16 + (346.74)8 = (????)8
• (4C.B7)16 + (123.46)8 = (????)10
• (9D.AB6)16 + (306.51)8 = (????)2
• (AD.BD)16+ (603.46)8+ (952.99)10+ (11011011.1101)2 = (????)10

6. Faraz, Kabir and Faruqe made a club called "The Junto" to read books and discuss ideas. Faraz read (232)8 science books. He read (E)16 times as many history books as science books. How many more history books than science books did he read? What is the total number of books he read? How many more books does he need to read to reach a total of (11100000011)2?
7. Faraz collected donations for many worthy organizations. He had (2467.AD)16 Taka in a bank account to start a new hospital. Omar gave him another (11010011.011000)2 Taka. How much more money must Faraz collect from Kabir if he needs (EAEE)16 Taka for the hospital, in decimal?
8. The day before the great battle at the Black Gate, a company of (1C2)16 orcs camped among the host of Mordor. But an argument broke out over dinner, and 1/3 of them were killed. Then 2/5 of the remainder died when a drunken troll stumbled through their camp during the night. (1101)2 of them ran away from the battlefield. How many of the orcs were left to join the morning's battle?
9. Draw flowchart for printing the sequence: 1, 3, 5, 9, …, N.
10. Draw flowchart for printing the sequence: 2, 4, 6, 8, …, N.
11. Draw flowchart for printing the sequence: 1, -1, 2, -2, 3, -3, …, n, -n.
12. Draw a flowchart that prints all the numbers up to N which are divisible by 5, 10 or 3.
13. Draw flowchart for printing the sequence: 1/2, 2/4, 3/6, 4/8, …, n/2n.
14. Draw a flowchart for GPA calculation for N classes.
15. Draw a flowchart for printing the sequence 1, -3, 5, -7, …, n.
16. Draw a flowchart for the Fibonacci series (e.g. 0, 1, 1, 2, 3, 5, 8, 13, …, F(n), where F(n) = F(n-1) + F(n-2)).
17. Draw a flowchart for the N factorial series (e.g. 5! = 5*4*3*2*1).
18. Draw a flowchart for printing the sum 1+2+3+4+...+N, or write pseudocode and draw a flowchart for the sum 2+4+6+8+...+N or 1+3+5+9+...+N.
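All of the conversion questions above reduce to the same two steps: convert each number (including its fractional part) to decimal, do the arithmetic, then convert the result back to the target base. The Python sketch below is a practice helper for checking answers (the function names are just illustrative); the exam expects the steps by hand.

# Convert a number string with an optional fractional part (e.g. "6F.3C")
# from the given base to a decimal value, and back again.
DIGITS = "0123456789ABCDEF"

def to_decimal(s: str, base: int) -> float:
    s = s.upper()
    whole, _, frac = s.partition(".")
    value = int(whole, base) if whole else 0
    for i, d in enumerate(frac, start=1):
        value += DIGITS.index(d) / base ** i
    return value

def from_decimal(x: float, base: int, frac_digits: int = 4) -> str:
    whole = int(x)
    frac = x - whole
    digits = ""
    while whole:
        digits = DIGITS[whole % base] + digits
        whole //= base
    digits = digits or "0"
    if frac_digits:
        digits += "."
        for _ in range(frac_digits):
            frac *= base
            digits += DIGITS[int(frac)]
            frac -= int(frac)
    return digits

if __name__ == "__main__":
    # In the style of question 5: (6F.3C)16 + (203.25)8 expressed in hexadecimal.
    total = to_decimal("6F.3C", 16) + to_decimal("203.25", 8)
    print(from_decimal(total, 16))   # prints F2.9000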

Chapter 4
1. What is Computer Network? What is WAN? What is LAN? What is MAN?
2. What is topology? How many types of topologies are there? Explain them with diagram.
3. What Topologies are appropriate for LAN?
4. What is the difference between WAN and MAN?
5. What is the difference between data and information?
6. If you have to talk to a robot, name the devices and techniques that need to be in the robot to achieve the interaction requirement.
7. What are Intranet, Extranet and Internet? Explain their uses.
8. What is Database? Differentiate between database and Database Management Systems.
9. Describe different Levels of Abstraction.
10. What are DDL and DML?
11. What is the difference between procedural and non-procedural language?
12. Define Instances and Schemas. What is the role of database administrator?
13. What is File System Data Management?
14. What is data Dependency? What is Data Redundancy?
15. What is the difference between Relational Database and Non-Relational Database?
16. Why is a database management system preferable to a file management system?
Chapter 5
1. What is Software? What is software Engineering, computer Science and System Engineering?
2. What is the difference between software Engineering and computer Science and system engineering?
3. What are the generic activities of all software processes?
4. What are the attributes of good software? Explain any 5.
5. Write down the steps of waterfall model. What are the disadvantages of waterfall model?
6. What is process Model? What are the weaknesses of process Model?
7. What are the professional and ethical responsibilities of a software engineer? Give 2 responsibilities that every software engineer should follow.
8. What is the code of ethics that engineers should follow?
9. What is a computer virus? Give at least 5 categories of viruses.
10. How do viruses affect us? How can we protect our computers from viruses?
11. What is data theft? How can we protect our data from hackers?
12. What is Integrity? What is cryptography?

Encrypt or Decrypt this using the Confusion Method (a verification sketch follows these examples).
• Key 11
O Plaintext:
 No one has yet realized the wealth of sympathy, the kindness and generosity hidden in the soul of a child. The effort of every true education should be to unlock that treasure.
O Cipher Text:
 YZ ZYP SLD JPE CPLWTKPO ESP HPLWES ZQ DJXALESJ, ESP VTYOYPDD LYO RPYPCZDTEJ STOOPY TY ESP DZFW ZQ L NSTWO. ESP PQQZCE ZQ PGPCJ ECFP POFNLETZY DSZFWO MP EZ FYWZNV ESLE ECPLDFCP.

• Key 13
o Plaintext:
 Education is not the filling of a pail, but the lighting of a fire.

O Cipher Text:
 RQHPNGVBA VF ABG GUR SVYYVAT BS N CNVY, OHG GUR YVTUGVAT BS N SVER.

• Key 17
O Plaintext:
 It is the mark of an educated mind to be able to entertain a thought without accepting it.
O Cipher Text:
 ZK ZJ KYV DRIB FW RE VULTRKVU DZEU KF SV RSCV KF VEKVIKRZE R KYFLXYK NZKYFLK RTTVGKZEX ZK.

• Key 23
o Plaintext:
 Education is simply the soul of a society as it passes from one generation to another.
o Cipher Text:
 BARZXQFLK FP PFJMIV QEB PLRI LC X PLZFBQV XP FQ MXPPBP COLJ LKB DBKBOXQFLK QL XKLQEBO.

• Key 05
o Plaintext:
 The roots of education are bitter, but the fruit is sweet.
o Cipher Text:
 YMJ WTTYX TK JIZHFYNTS FWJ GNYYJW, GZY YMJ KWZNY NX XBJJY.

• Key 05
o Plaintext:
 Part of the inhumanity of the computer is that, once it is competently programmed and working smoothly, it is completely honest.
o Cipher Text:
 UFWY TK YMJ NSMZRFSNYD TK YMJ HTRUZYJW NX YMFY, TSHJ NY NX HTRUJYJSYQD UWTLWFRRJI FSI BTWPNSL XRTTYMQD, NY NX HTRUQJYJQD MTSJXY.

• Key 06
o Plaintext:
 The Internet is not just one thing, it's a collection of things - of numerous communications networks that all speak the same digital language.

o Cipher Text:
 ZNK OTZKXTKZ OY TUZ PAYZ UTK ZNOTM, OZ'Y G IURRKIZOUT UL ZNOTMY - UL TASKXUAY IUSSATOIGZOUTY TKZCUXQY ZNGZ GRR YVKGQ ZNK YGSK JOMOZGR RGTMAGMK..


• Key 09
o Plaintext:
 Treat your password like your toothbrush. Don't let anybody else use it, and get a new one every six months.
o Cipher Text:
 CANJC HXDA YJBBFXAM URTN HXDA CXXCQKADBQ. MXW'C UNC JWHKXMH NUBN DBN RC, JWM PNC J WNF XWN NENAH BRG VXWCQB.
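Judging from the examples above, the "confusion method" used here behaves like a Caesar shift: each letter is moved forward through the alphabet by the key (11, 13, 17, ...), and non-letters pass through unchanged. The sketch below is a reconstruction of that rule for checking answers, not official solution code, and the function name is just illustrative.

def confusion(text: str, key: int, decrypt: bool = False) -> str:
    """Caesar-style shift by `key` positions; decryption shifts backwards."""
    shift = -key if decrypt else key
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)   # spaces and punctuation pass through unchanged
    return "".join(out)

if __name__ == "__main__":
    plain = "The roots of education are bitter, but the fruit is sweet."
    cipher = confusion(plain, 5).upper()
    print(cipher)  # expected: YMJ WTTYX TK JIZHFYNTS FWJ GNYYJW, GZY YMJ KWZNY NX XBJJY.
    print(confusion(cipher, 5, decrypt=True))   # recovers the plaintext (uppercased)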


Encrypt or Decrypt this using the Diffusion Method (a reconstruction sketch follows the P-Boxes below).
• Key 05
o Plaintext:
 The roots of education are bitter, but the fruit is sweet.
o Cipher Text L1:
 i. TOETRTTRST
 ii. HTDIEETUS
 iii. ESUOBRHIW
 iv. ROCNIBETE
 v. OFAATUFIE
o Cipher Text L2:
 i. ESUOBRHIW
 ii. HTDIEETUS
 iii. OFAATUFIE
 iv. ROCNIBETE
 v. TOETRTTRST

P-Box
Level 1: 1 2 3 4 5
Level 2: 5 2 1 4 3




Plaintext:
 THE COMPUTING FIELD IS ALWAYS IN NEED OF NEW CLICHES.
Cipher Text L1:
 TTDSFH HIIINE ENSNES CGANW OFLEC MIWEL PEADT ULYOC
Cipher Text L2:
 CGANW ENSNES HIIINE MIWEL OFLEC PEADT TTDSFH ULYOC

Cipher Text: CGANW ENSNES HIIINE MIWEL OFLEC PEADT TTDSFH ULYOC

P-Box
Level 2: 1 2 3 4 5 6 7 8
Level 1: 7 3 2 1 5 4 6 8
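In the diffusion examples above, the plaintext letters are written row by row into as many columns as the key, each column is read downwards to form a group (Cipher Text L1), and the P-Box's permutation row then moves column group i to position P[i] (Cipher Text L2). The Python sketch below is a reconstruction of that procedure, inferred from the worked examples, for checking answers; it is not official solution code and the function name is only illustrative.

def diffusion(plaintext: str, columns: int, pbox: list) -> tuple:
    """Columnar transposition: return (level-1 column groups, P-Boxed level-2 groups)."""
    letters = [c.upper() for c in plaintext if c.isalpha()]
    # Write the letters row by row into `columns` columns, then read each column down.
    level1 = ["".join(letters[col::columns]) for col in range(columns)]
    # P-Box: level-1 column i goes to level-2 position pbox[i] (positions are 1-based).
    level2 = [""] * columns
    for i, pos in enumerate(pbox):
        level2[pos - 1] = level1[i]
    return level1, level2

if __name__ == "__main__":
    text = "The roots of education are bitter, but the fruit is sweet."
    l1, l2 = diffusion(text, 5, [5, 2, 1, 4, 3])
    print(l1)  # ['TOETRTTRST', 'HTDIEETUS', 'ESUOBRHIW', 'ROCNIBETE', 'OFAATUFIE']
    print(l2)  # ['ESUOBRHIW', 'HTDIEETUS', 'OFAATUFIE', 'ROCNIBETE', 'TOETRTTRST']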