
Knowledge is Power.

  • Who are you?

    Working on machines without understanding them? Then you should be here.

  • Where are you?

    Geographical location should not become a barrier to sharing our knowledge.

  • What do you do?

    Puzzles and interview questions are meant to be discussed here.

    Showing posts with label compiler.

    Sunday, March 28, 2010

    Most compilers recognize the file type by looking at the file extension.

    You might also be able to force the compiler to ignore the file extension by supplying a compiler switch. Note that you never define the __cplusplus macro yourself: the compiler defines it automatically when compiling a C++ source file, so code can test it to tell which language it is being compiled as. In MS VC++ 6, for example, there is a compiler switch, /Tc, that forces a C compilation instead of C++.
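    For example, a header can test the compiler-defined __cplusplus macro to adapt to whichever language it is being compiled as. A minimal sketch (the header and function names are invented for illustration):

    /* mylib.h -- hypothetical header, for illustration only. */
    #ifdef __cplusplus
    extern "C" {             /* compiled as C++: give these declarations C linkage */
    #endif

    int add(int a, int b);   /* an ordinary C function declaration */

    #ifdef __cplusplus
    }
    #endif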
    The typical phases of compilation are:
    1. Lexical analysis.
    2. Syntactic analysis.
    3. Semantic analysis.
    4. Pre-optimization of the internal representation.
    5. Code generation.
    6. Post-optimization.
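    As a rough sketch of how a tiny function moves through those phases (the function is invented, and the per-phase notes in the comments are simplified):

    /* One small function, annotated with roughly what each phase does to it. */
    int incr(int x)
    {
        return x + 1;    /* 1. lexing: tokens 'return', 'x', '+', '1', ';'            */
                         /* 2. parsing: an AST for the return statement               */
                         /* 3. semantic analysis: x is an int, the result is an int   */
                         /* 4. pre-optimization: simplify the intermediate form       */
                         /* 5. code generation: emit code for the target machine      */
                         /* 6. post-optimization: peephole tuning of the emitted code */
    }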
    Linkage is used to determine what makes the same name declared in different scopes refer to the same thing. An object only ever has one name, but in many cases we would like to be able to refer to the same object from different scopes. A typical example is the wish to be able to call printf() from several different places in a program, even if those places are not all in the same source file.

    The Standard warns that declarations which refer to the same thing must all have compatible type, or the behaviour of the program will be undefined. Except for the use of the storage class specifier, the declarations must be identical.


    The three different types of linkage are:


    * external linkage
    * internal linkage
    * no linkage



    In an entire program, built up perhaps from a number of source files and libraries, if a name has external linkage, then every instance of that name refers to the same object throughout the program. For something which has internal linkage, it is only within a given source code file that instances of the same name will refer to the same thing. Finally, names with no linkage refer to separate things.

    Linkage and definitions

    Every data object or function that is actually used in a program (except as the operand of a sizeof operator) must have one and only one corresponding definition. This "exactly one" rule means that for objects with external linkage there must be exactly one definition in the whole program; for things with internal linkage (confined to one source code file) there must be exactly one definition in the file where it is declared; for things with no linkage, whose declaration is always a definition, there is exactly one definition as well.
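    A minimal two-file sketch of the "exactly one definition" rule for a name with external linkage (the file and variable names are invented):

    /* file1.c -- the single definition of a name with external linkage. */
    int counter = 0;        /* definition: this is where storage is allocated */

    /* file2.c -- every other file only declares the name. */
    extern int counter;     /* declaration: refers to the definition in file1.c */

    void bump(void)
    {
        counter++;          /* both files name the same object */
    }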


    The three types of accessibility that you will want of data objects or functions are:


    * Throughout the entire program,
    * Restricted to one source file,
    * Restricted to one function (or perhaps a single compound statement).



    For the three cases above, you will want external linkage, internal linkage, and no linkage respectively. The external linkage declarations would be prefixed with extern, the internal linkage declarations with static.



    #include <stdio.h> /* the header name was lost in the original; stdio.h is assumed
                          (nothing in this snippet actually requires it) */

    // External linkage.
    extern int var1;

    // Definition with external linkage.
    extern int var2 = 0;

    // Internal linkage.
    static int var3;

    // Function with external linkage.
    void f1(int a) {}

    // Function with internal linkage: it can only be invoked by name from within this file.
    static int f2(int a1, int a2)
    {
        return (a1 * a2);
    }
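    The example above covers external and internal linkage; for completeness, a name with no linkage is simply a block-scope identifier, so each declaration creates a distinct object. A small sketch (names invented):

    int g(void)
    {
        int count = 0;      /* this 'count' has no linkage ...                   */
        return count;
    }

    int h(void)
    {
        int count = 10;     /* ... and is a different object from the one in g() */
        return count;
    }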

    Saturday, February 20, 2010

    Actually, Java as well as C# uses just-in-time (JIT) compilation, which is a kind of mix of the two. Java source is first compiled to an intermediate bytecode that runs on the JVM. The whole compiler vs. interpreter discussion is a long debate....

    Speed:
    Up to a few years ago the speed difference was considerable: an interpreter might run 10-100x slower than the same code compiled. New technologies have made the difference much less relevant except for the most time-critical code (today bytecode is more like 2x slower, and in some benchmarks it is equivalent to the same code in compiled languages like C and C++). Two of the main technologies that made this possible are the pre-compilation of critical parts of the code to native code and the ability to adapt the program to the individual hardware: compiled code is set in stone at compile time, so if the computer it runs on has a more advanced configuration than the one used for compilation, it cannot fully use those new features. With just-in-time compilation, the VM can compile the program to use the full range of resources available.

    Portability:
    This is where JIT code gains an advantage over compiled code. Let's say you write a program and compile it on a Linux x86 PC. If you want to take that program to a SPARC, ARM, mainframe, or even a Windows or OS X x86 machine, you would at least need to recompile it (and in most cases it is a lot more work than just recompiling, because most operating systems are incompatible except for the most basic functions). If you move to a new version of the OS, processor, etc., you would probably need to recompile the code as well.

    This recompilation process can be a real headache when you have thousands of programs running on many different operating systems and hardware platforms, so a lot of big companies are using Java and other technologies to "future proof" their code. The main question is finding the best combination of speed and portability for each kind of application.