
The Birth of Programming Languages

The first modern programming language is hard to identify, but historians trace the profession of programming back to Ada Lovelace, a mathematician often credited with creating the world’s first computer program in the mid-1800s. In these early days of computing, languages provided no abstraction from the computer hardware on which the programs would run, and hardware restrictions often defined the language. The earliest computing machines had fixed programs, and changing how these machines behaved was a labor-intensive process that involved redesigning the hardware in conjunction with the program it would run. In the 1940s, general-purpose computers began to emerge that were able to store and load programs in memory. Creating programs for these computers required developers to use first-generation machine code languages. Eventually, second-generation languages, such as assembly, emerged and provided a symbolic representation of the numeric machine code. Regardless, each language was low-level and cryptic. Developing even trivial programs was error-prone, required a great deal of intellectual ability, and generally took a long time.

The Compiler Era

The 1950s gave rise to the first modern programming languages—Fortran, LISP, and COBOL—that represent the ancestors of the languages in widespread use today. Each was a higher-level, third-generation language that abstracted away the underlying complexities of the hardware environment and allowed developers to create programs using a more consumable and understandable syntax. Though these languages greatly simplified development, their adoption was slow because developers wanted assurance that the performance of programs written in them was comparable to that of hand-coded assembly. As compilers were optimized, adoption of higher-level programming languages progressed. Even at this early stage of language evolution, language capabilities diverged with an emphasis on solving specific types of programming problems.
COBOL and Fortran dominated business and scientific computing, respectively. LISP prospered in academia. These early languages, and their emphasis on specialization for solving certain types of programming problems, were an indication of the language evolution to come.

The Paradigm Era

Language evolution experienced significant innovation in the 1960s and 1970s, and many of the major paradigms in use today formed in this period. Software systems were increasing in size, and development was incredibly complex. Maintaining those systems presented additional challenges. Language designers and developers sought ways to make complex programming tasks easier by raising the level of abstraction. Discussion of a language’s effect on software design began to surface, and the use of the GOTO statement and the advantages of structured programming were serious topics of debate. A multitude of languages emerged that supported these new paradigms. Object-oriented programming was born when the Simula programming language was created, and eventually Smalltalk would surface as the first pure dynamically typed, object-oriented language. C and Pascal were created in this era, as was Structured Query Language (SQL). ML, one of the first functional languages, was also invented. A major shift was under way to make programs easier to develop and maintain through new language enhancements and programming paradigms.

The 1980s were a period of far less innovation, with emphasis turning toward paradigm and language maturation. Languages such as C++, begun in 1979 and originally called C with Classes, brought the advantages of object-oriented programming to a language that was strong in systems programming. Modular languages, such as Modula and Ada, began to emerge that helped in developing large-scale systems. Leveraging the capabilities of advanced computer architectures led to compiler advancements that were a precursor to managed code environments.
The Productivity Era

Without question, the early 1990s were driven by a motivation to increase developer productivity. Managed runtime environments emerged that removed the burden of memory allocation and deallocation from the developer. Advanced programming environments, such as PowerBuilder, increased productivity by combining the language and integrated development environment (IDE) into a consolidated product suite. Attempts were made to push languages down the stack, and attention shifted to advanced tooling and frameworks that increased developer productivity.

In the mid-1990s, the World Wide Web (WWW) popularized the use of the Internet as a massively scalable hypermedia system for the easy exchange of information. Soon thereafter, a nascent technology was integrated into the omnipresent Navigator browser. Java technology, and applets specifically, had been officially unveiled to the world as a language whose programs could execute atop a managed environment in a plethora of operating environments. Although the hype surrounding applets fizzled out shortly thereafter, the write once, run anywhere (WORA) promise of Java moved to the server side as dynamic web applications began to increase in popularity. The portability of Java applications was realized through a Java Virtual Machine (JVM) that managed application execution. The language was also far simpler than C and C++ because the JVM provided complex memory management capabilities. What was once a complex task, creating programs that ran on disparate platforms, now required little effort. The simplicity of the language, combined with the portability of applications, was a force that would affect computing for the next decade. The ecosystem surrounding Java technology flourished, and tools and frameworks emerged that made Java programming tasks much easier.
Early in the new millennium, .NET arrived along with its Common Language Runtime (CLR), and platform support for programs written in multiple languages became a reality. Initially, many scoffed at the need for a platform supporting multiple languages. Instead, they adopted general-purpose languages and accompanying tools, frameworks, and managed runtime environments, believing these offered the greatest productivity advantages. Even though a multitude of languages existed, organizations rejected many of them in favor of Java and C#. However, platform support for multiple languages proved to eliminate a significant barrier to language adoption. Eventually, developers discovered that frameworks and tools were increasing in complexity and becoming difficult to work with. In many ways, these frameworks and tools were hindering productivity, and the advantages of using a single general-purpose language were called into question. Developers were growing frustrated: though a framework or tool might exist that aided a task, learning it was as daunting as learning a new language altogether. As the complexity of frameworks and tools continued to increase, developers sought alternative languages that were easier and more productive.

The overview goes on to talk about the present day and the postmodern era. The Java and .NET platforms dominate enterprise development, and each supports a variety of languages. Platform support has reduced the barrier to entry. Consequently, a shift is taking place as developers look for alternative languages (instead of frameworks, for example) that increase productivity and make programming tasks easier. The postmodern era recognizes that multiple languages are a fact of life. Language evolution will continue, and new languages will emerge that blend aspects of multiple paradigms. The strong distinction between static and dynamic languages will disappear, and metaprogramming will become more mainstream.
Even compilers will change, as developers gain more control and are able to take advantage of compilers just as they do APIs today. Without question, we live in interesting times, especially if you’re a language geek.