ANSI C Programming
eBook - ePub

Learn ANSI C step by step

Yashavant Kanetkar


About This Book

Learn real-world C programming as per the latest ANSI standard.

In this heterogeneous world a program that is compiler dependent is simply unacceptable. ANSI C Programming teaches you the C language in such a manner that you are able to write truly portable programs. This book doesn't assume any programming background. It begins with the basics and steadily builds the pace so that the reader finds it easy to handle complicated topics towards the end. Each chapter has been designed to create a deep and lasting impression on the reader's mind. "If taught through examples, any concept becomes easy to grasp." This book follows this dictum faithfully; Yashavant has crafted well-thought-out programming examples for every aspect of C programming.

KEY FEATURES
  • Learn real-world C programming as per the latest ANSI standard
  • All programs work on DOS, Windows as well as Linux
  • Detailed explanation of difficult concepts like "Pointers" and "Bitwise Operators"
  • End-of-chapter exercises drawn from different universities
  • Written by the best-selling author of Let Us C

WHAT WILL YOU LEARN
  • Algorithms, control instructions, strings, bitwise operators, flowcharts, functions
  • Structures, enumerations, data types, pointers, unions, dynamic memory allocation
  • Storage classes, arrays, File IO, linked lists

WHO THIS BOOK IS FOR
Students, programmers, researchers, and software developers who wish to learn the basics of ANSI C programming.

AUTHOR BIO
Yashavant Kanetkar's programming books have almost become a legend. Through his original works in the form of books and Quest Video courseware CDs on C, C++, Data Structures, VC++, .NET, Embedded Systems, etc., Yashavant Kanetkar has created, moulded and groomed lacs of IT careers in the last decade and a half. In recognition of his immense contribution to IT education in India, he has been awarded the "Best .NET Technical Contributor" and "Most Valuable Professional" awards by Microsoft. His current passion includes Device Driver and Embedded System Programming.
Yashavant has recently been honored with a "Distinguished Alumnus Award" by IIT Kanpur for his entrepreneurial, professional and academic excellence. Yashavant holds a BE from VJTI Mumbai and an M.Tech. from IIT Kanpur. His current affiliations include being a Director of KICIT and KSET. His LinkedIn profile: linkedin.com/in/yashavant-kanetkar-9775255


Information

Year
2020
ISBN
9789389423006

1 Introduction To Programming

  • Basic Model of Computation
  • Algorithms
  • Flowchart
  • Programming Languages
  • Stages in the Development of a C Program
    Developing the Program
    Compiling the Program
    Linking the Program
    Testing the Program
    Documenting the Program
  • Summary
  • Exercise
Attempting to learn the C language before having any idea about the basic model of computation, the way to evolve a solution, the way to represent it, and the overall program development process would be like putting the cart before the horse. Hence, in this chapter we would focus on these topics and create a solid background before we venture into C programming. Let us begin with the computation model…

Basic Model of Computation

Really speaking, the idea of computing is not new to any of us. Each one of us has used pencil and paper to do fundamental computing operations like addition, subtraction, multiplication and division, or slightly more complex operations like computing lengths, areas, volumes, etc. While performing all these computations we follow some definite, unambiguous set of rules. Similarly, when we do computation using a computer we follow a certain set of rules. The basic model of computation involves input, process and output. These are shown in Figure 1.1.
Figure 1.1
The input is received using input devices like keyboard, mouse, touch screen, etc. Once the input is received it is processed using the Central Processing Unit (CPU) of the computer and the result is then displayed using output devices like VDU, printer, plotter, etc. The processing involves performing arithmetic operations and logical comparison operations (like a < b, c >= d, etc.). In addition to this the CPU also controls the operation of input / output devices and memory. While solving any problem using a computer we need to explicitly write down the steps keeping the above model of computation in mind. This explicit set of steps for solving a given computing problem is called an Algorithm. In Chapter 2 we would be studying and explicitly writing these steps for a variety of problems. For now, let us study the purpose of an algorithm in general.

Algorithms

As mentioned in the previous section, the explicit set of steps for solving a given computing problem is called an algorithm. Thus algorithms are used as a means of communication for specifying solutions to computational problems, unambiguously, so that others can understand the solutions. More precisely, an algorithm is a sequence of instructions that act on some input data to produce some output in a finite number of steps. An algorithm must have the following properties:
(a) Input - An algorithm must receive some input data supplied externally.
(b) Output - An algorithm must produce at least one output as the result.
(c) Finiteness - No matter what the input is, the algorithm must terminate after a finite number of steps. For example, a procedure that goes on performing a series of steps infinitely is not an algorithm.
(d) Definiteness - The steps to be performed in the algorithm must be clear and unambiguous.
(e) Effectiveness - One must be able to perform the steps in the algorithm without applying any intelligence. For example, the step "Select three numbers which form a Pythagorean triplet" is not effective.
While creating an algorithm it is important to choose an appropriate model of computation to describe an algorithm. Based on the choice we make, the type of computations that can be carried out in the model gets decided. For example, if our computational model contains only ruler and compass, then using these primitives we can write down explicit algorithms for drawing a line segment of specific length, bisecting it, drawing an angle, bisecting it, etc. However, using these primitives we would not be able to trisect an angle. For doing this we would need additional primitives like a protractor. For arithmetic computations we can use various computing models like calculators. As you can imagine, with each of these models of computing, the rules for specifying a solution (algorithms) are different. Therefore, it is important to first choose an appropriate model of computation while creating algorithms.
There are two ways in which we can describe an algorithm that is used for solving a problem:
(a) Describe it in the form of a step-by-step procedure written in textual form
(b) Describe it in the form of a figure called a Flowchart
We are quite habituated to describing a step by step procedure in textual form. However, when it comes to describing the procedure using a flowchart we need to understand the common rules followed for drawing it. Let us now try to understand them.

Flowchart

A flowchart describes an algorithm by showing the different steps in it as boxes of various shapes connected using arrows to indicate their order. Thus a flowchart gives a diagrammatic representation of a step-by-step solution to a given problem. Flowcharts are extensively used to describe an algorithm, as “a picture is worth a thousand words”. A typical flowchart uses the symbols shown in Figure 1.2 to represent different tasks that are contained in an algorithm.
Figure 1.2
Let us now take an example and try to describe an algorithm for it in both forms: textual and flowchart. Suppose we wish to find out the biggest of three numbers. Given below are an algorithm and a flowchart for it, both of which are self-explanatory. It would be a good idea to take any three numbers and try out the algorithm and the flowchart on them.
Figure 1.3

Algorithm:

Step 1: Enter three numbers, say a, b, c
Step 2: If a > b, take big = a; otherwise take big = b
Step 3: If c > big, take big = c
Step 4: Print big
Step 5: Stop
Once an algorithm is ready in textual or flowchart form we can easily convert it into a computer program using the grammar rules (syntax) of a programming language. Before we see how this is done, it would be appropriate to know some details about programming languages.

Programming Languages

A computer understands only 0s and 1s. Hence it has to be instructed through a program expressed only in terms of 0s and 1s. Such a program is called a Machine Language program. As you can appreciate, writing programs in machine language is difficult, tedious and error-prone.
To overcome this difficulty, a language called Assembly Language was invented. In a program written in this language mnemonic symbols (abbreviations) are used instead of 0s and 1s. For example, ADD is used for addition, SUB for subtraction, CMP for comparison, etc. Naturally it is easy to remember these mnemonics as compared to their equivalent 0s and 1s. However, the computer doesn’t understand these mnemonics. Therefore, it is necessary to convert an assembly language program into machine language before it is executed. This translation task is done through a converter program called Assembler.
A language in which each statement or instruction is directly translated into a single machine instruction is known as a Low-level language. Each mnemonic of an assembly language has a unique machine code. Hence assembly language is a low-level language. Machine language is also a low-level language. The instructions that one microprocessor (CPU) can understand are different from those understood by another. This is because the internal architecture of each microprocessor is different. Hence the machine language, and correspondingly the assembly language, for each microprocessor is different. Thus, the machine language and assembly language for an Intel Pentium microprocessor are different from those for an Atmel ATmega32 microprocessor. This means an assembly language program written for one microprocessor cannot be used on another microprocessor. In other words, it is not portable. Hence, to write an assembly language program, a programmer must have detailed knowledge of the instruction set of the particular microprocessor, its internal architecture, registers, connection of peripherals to ports, etc.
To overcome these difficulties associated with assembly language, High-level languages have been developed. In a high-level language, English-like instructions are used instead of mnemonics. Instructions in these languages permit programmers to describe tasks in forms that are problem-oriented rather than machine-oriented. Moreover, one does not have to possess knowledge of the architecture of the microprocessor to be able to write programs in these languages. Some of the popular high-level languages include BASIC, FORTRAN, Pascal, COBOL, C, C++, Java, C#, etc. The differences that exist between these languages are beyond the scope of this book. In this book we would be concentrating only on learning one of the most popular and widely used high-level languages, namely C.
A program written in a high-level language is converted into a machine language program using software called a Compiler. These compilers are targeted at different microprocessors. For example, the Visual Studio compiler can convert a C language program into machine language instructions that can be understood by Intel microprocessors. Similarly, the gcc compiler can convert a C language program into machine language instructions that can be understood by the ATmega32.

Stages in the Development of a C Program

One has to go through several stages before a program gets ready to be used on a computer. These stages are discussed below. The entire process is also shown in Figure 1.4 in the form of a flowchart to help understand the process better.

Developing the Program

The task of developing a program for a particular problem involves a careful study of the problem with an aim to clearly identify the output desired from the program and input that would be provided to it. For example, while developing a program to solve a quadratic ...
