Old-school programming techniques you probably don't miss

Started by dhilipkumar, May 11, 2009, 10:00 AM


dhilipkumar

Old-school programming techniques you probably don't miss

Despite its complexity, the software development process has gotten better over the years. "Mature" programmers remember how many things required manual intervention and hand-tuning back in the day. Today's software development tools automatically perform complex functions that programmers once had to write explicitly. And most developers are glad of it!

Young whippersnappers may not even be aware that we old fogies had to do these things manually. These functions still exist in the software you write, and some specialized programmers, such as Linux kernel developers, continue to apply these techniques by hand. Yet most people depend on the functionality provided by built-in libraries, the operating system or some other part of a development environment.

Let's take a moment to appreciate how much things have improved. For this article, I asked several longtime developers for their top old-school programming headaches and added many of my own to boot. Join in as we wallow in nostalgia and show off our scars -- then tell us about the development techniques you're happy you left behind in the article comments.

Sorting algorithms and other hand-coded fiddly stuff
Applications you write today still need to sort data, but all you do is call a sort routine. As late as the 1980s, though, programmers were having flame wars about sorting algorithms: binary trees versus modified bubble sort at 50 paces!

It's not just that developers had to choose an algorithm; we had to write the code anew, every time. Donald Knuth wrote an entire volume about the subject in his "The Art of Computer Programming" series; you weren't a serious programmer unless you at least wanted to buy that book.
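For the youngsters, here is roughly what that meant: a hand-rolled bubble sort of the kind we retyped into every new program, next to the one-line library call that replaced it. A minimal C sketch; the array contents are just example data.

```c
#include <stdio.h>
#include <stdlib.h>

/* The old way: write the sort yourself, every single time. */
static void bubble_sort(int *a, size_t n)
{
    for (size_t i = 0; i + 1 < n; i++)
        for (size_t j = 0; j + 1 < n - i; j++)
            if (a[j] > a[j + 1]) {
                int tmp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
}

/* The new way: hand the problem to the library. */
static int cmp_int(const void *p, const void *q)
{
    int x = *(const int *)p, y = *(const int *)q;
    return (x > y) - (x < y);
}

int main(void)
{
    int data[] = { 42, 7, 19, 3, 88, 1 };
    size_t n = sizeof data / sizeof data[0];

    bubble_sort(data, n);                     /* circa 1982 */
    qsort(data, n, sizeof data[0], cmp_int);  /* today */

    for (size_t i = 0; i < n; i++)
        printf("%d ", data[i]);
    putchar('\n');
    return 0;
}
```

The flame wars were about which loop went inside `bubble_sort`; nobody argues much about `qsort` anymore.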

Honorable mention:
# Implementing a linked list or hash table yourself
# Hand-coding XML for SOAP deployment
# Choosing among file access methods such as sequential, direct and indexed access

Creating your own graphical user interfaces

Text-mode interfaces once ruled the earth. Until sometime in the 1980s -- and arguably into the '90s -- any developer who wanted to create a windowing system wrote her own windowing abstractions and tested on CGA, EGA and VGA cards, the way you test on Firefox, IE and Safari nowadays (I hope). Or she bought an expensive add-on windowing library and still fiddled with settings for days on end.

No wonder so many "interfaces" consisted of "Enter 1 to Add Records, 2 to Edit Records, 3 to Delete Records."

Today, to create a GUI, you use the user interface widgets built into your favorite IDE (integrated development environment) or a stand-alone tool for GUI design. You drag and drop components, call a few functions, and declare some logic. It's magic.
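For contrast, here is roughly what lived behind that kind of numbered menu -- a minimal plain-C sketch, with hypothetical add/edit/delete stubs standing in for real record handling.

```c
#include <stdio.h>

/* Hypothetical stubs -- a real program would hit a file or database here. */
static void add_record(void)    { puts("Record added.");   }
static void edit_record(void)   { puts("Record edited.");  }
static void delete_record(void) { puts("Record deleted."); }

int main(void)
{
    int choice = 0;

    /* The state of the art in user experience, circa 1984. */
    while (choice != 4) {
        printf("Enter 1 to Add Records, 2 to Edit Records, "
               "3 to Delete Records, 4 to Quit: ");
        if (scanf("%d", &choice) != 1)
            break;                      /* bail out on bad input */

        switch (choice) {
        case 1: add_record();    break;
        case 2: edit_record();   break;
        case 3: delete_record(); break;
        case 4: break;
        default: puts("Invalid choice."); break;
        }
    }
    return 0;
}
```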

Go To and spaghetti code ...
... or really, any programming methodology before structured programming and object-oriented development came along -- except only prissy academics used words like "methodology" back then, and most of us didn't have a college degree.

In many early programming languages, the application flow was controlled by instructions like GOTO 10, whereupon the code would jump to Label 10, which might or might not appear between 9 and 11, and continue execution. It was frighteningly easy to jump over and into code, creating a Gordian knot of executable code that became a maintenance migraine, giving rise to the pejorative term "spaghetti code."

Structured programming, which suggested that code could be categorized into decisions, sequences and loops, "was a godsend in the '70s ... much better than the 'code now, design later' approach it replaced," says Jay Elkes, who was a Cobol programmer back then. (I must note, however, that the seminal 1968 article "Go To Statement Considered Harmful," penned by structured programming proponent Edsger Dijkstra, spawned entirely too many "...Considered Harmful" parodies.)

Newer object-oriented languages, with C++ hitting the mainstream in the early '90s, eliminated the need for structured programming -- which hasn't kept some developers from going full circle to reinvent Go To on their own.
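Both fragments below count from 1 to 10. The first is a contrived C sketch in the spirit of those numbered-label days, with control flow stitched together out of jumps; the second is the structured version that replaced it.

```c
#include <stdio.h>

int main(void)
{
    /* Spaghetti style: every transfer of control is an explicit jump. */
    int i = 1;
top:
    if (i > 10) goto done;
    printf("%d\n", i);
    i++;
    goto top;
done:

    /* Structured style: the same loop, readable at a glance. */
    for (int j = 1; j <= 10; j++)
        printf("%d\n", j);

    return 0;
}
```

Multiply the first style by a few thousand lines and several maintainers, and you have the Gordian knot the article describes.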


computerworld

dhilipkumar

Manual multithreading and multitasking
Today, we expect our computers to do many things simultaneously through multitasking (running multiple processes on a single operating system) and multithreading (running multiple threads of execution within a single program). But early PC operating systems such as CP/M and MS-DOS were built to do just one thing at a time.

Brilliant programmers with specialized knowledge could hack alternatives, but it was a messy and painful experience. C programmers, for example, might turn to setjmp and longjmp to implement threads, which was a recipe for a long weekend of debugging accompanied by at least three pots of coffee.
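setjmp and longjmp are still in the C standard library. The sketch below shows only the non-local jump they provide, which was the primitive those home-grown threading hacks (and plenty of error-recovery schemes) were built on; a real cooperative-threading package also had to juggle a separate stack per thread, which is exactly where the long debugging weekends came from.

```c
#include <setjmp.h>
#include <stdio.h>

static jmp_buf recover;   /* saved execution context */

static void risky_work(int fail)
{
    if (fail) {
        /* Non-local jump: unwind straight back to where setjmp was called. */
        longjmp(recover, 1);
    }
    puts("work completed normally");
}

int main(void)
{
    if (setjmp(recover) == 0) {
        /* First pass: context saved, setjmp returns 0. */
        risky_work(1);
        puts("never reached");
    } else {
        /* We arrive here via longjmp. */
        puts("recovered after longjmp");
    }
    return 0;
}
```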

Multithreading still isn't easy, but development tools and operating systems have streamlined the process. Since the introduction of Windows 2000, for example, developers have been able to use overlapped I/O to enable an auto-scaling multithreading platform, points out Dan Rosanova, principal consultant at Nova Enterprise Systems. That works "much better than the old 'start x threads to do y,'" he notes.

Honorable mention:
Writing Terminate-and-Stay-Resident (TSR) routines, an early and very popular hack toward multitasking, on MS-DOS. "One system call -- a specific group of register settings left in place when you invoked Interrupt 21h -- had the effect of returning control to DOS but leaving your code in allocated heap space," remembers Mark W. Schumann, software developer at Critical Results, who got started in college making things work on a DEC PDP-11.

"If you also redirected, for example, Interrupt 9 -- keyboard handler -- to point at your own code, which then chained back to the original interrupt vector, your code would be executed just before normal handling of the keystroke," continues Schumann. "That's how you wrote 'pop-up' programs back in the day. You had to be careful about file-system operations because DOS itself wasn't re-entrant, and you had to make your code extra-fast so it wouldn't mask the next interrupt."

Self-modifying code
In the 1960s, when memory was measured in "K" (1,024 bytes), programmers did anything to stuff 10 pounds of code into a five-pound bag of computing power. One example was writing programs that altered their own instructions while they were running. For instance, one of my programming friends fit a 7K print driver into 5K by shifting (assembly language) execution one bit to the right. Some of these solutions were breathlessly elegant, but they were a maintenance nightmare.

There were several variations on this theme. One programmer recalled using the contents of one register as data in another register. Another technique was to write stack-modifying code as a shortcut around limitations imposed by the language.

In dire circumstances, you might need to patch a program while it was still running. This might involve flipping the switches on the computer's front panel (yes, we had those) to modify instructions or data.

Or you might connect to an important daemon -- say, a print spooler -- hand-assemble code that you would place in some unused section of memory, and then patch a jump instruction in the active region of the program to branch to your new code. If you made a mistake, boom!

Plus, you had to remember to make the same change again in the source code and recompile it, or else you'd end up doing the same thing again the next time the system was rebooted, which could be months later, after you'd forgotten what you'd done.

computerworld

dhilipkumar

Memory management

Until the past decade or so, RAM and storage were amazingly limited, so efficient programming required impressive but time-consuming work-arounds. For example, developers had to spend a lot of time keeping track of memory allocation and deallocation themselves -- the housekeeping that automatic "garbage collection" now handles. Otherwise, they spent a lot more time fixing memory leaks (unintentional memory consumption) that eventually crashed the computer.
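For readers who have only ever had a garbage collector, a minimal C sketch of what "keeping track" meant: every malloc needed a matching free on every code path, or the memory simply leaked until the machine fell over.

```c
#include <stdlib.h>
#include <string.h>

/* Caller owns the returned buffer and must free() it -- forget that on
   any code path and the memory is gone until the process exits. */
char *duplicate(const char *s)
{
    char *copy = malloc(strlen(s) + 1);
    if (copy != NULL)
        strcpy(copy, s);
    return copy;
}

void demo(void)
{
    char *name = duplicate("old-school");
    if (name == NULL)
        return;           /* allocation failed */
    /* ...use name... */
    free(name);           /* the step a garbage collector now does for you */
}
```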

Early garbage-collection routines, such as those in the initial Ada compilers, essentially froze the computer while they cleared up memory; that wasn't useful behavior for developers trying to write software for airplane cockpits. Today, just about every development environment has a decent garbage collector.

But PC memory management required even more attention to detail. Instead of using an application programming interface (API) to write to the screen, you'd put characters on the screen (along with their attribute bytes) by memory block copy operations, writing directly to the hardware. On the Apple II, remembers one programmer, you wrote "bit blitters," which combined two or more bitmaps into one, to handle the Apple's strange graphics memory mapping.
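A sketch of the character-plus-attribute trick on a real-mode DOS machine: color text memory starts at segment 0xB800, and each screen cell is one character byte followed by one attribute byte. The far-pointer syntax and the MK_FP macro below are from the 16-bit Turbo C-style compilers of the era and won't build on a modern toolchain; it's here to show the shape of the technique, not to be compiled today.

```c
/* 16-bit real-mode DOS sketch (Turbo C-style far pointers). */
#include <dos.h>

void hello_direct(void)
{
    /* Color text screen lives at segment 0xB800; 80x25 cells,
       each cell = character byte + attribute byte. */
    unsigned char far *video = (unsigned char far *)MK_FP(0xB800, 0);
    const char *msg = "HELLO";
    int col;

    for (col = 0; msg[col] != '\0'; col++) {
        video[col * 2]     = msg[col];  /* character */
        video[col * 2 + 1] = 0x1F;      /* attribute: white on blue */
    }
}
```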

Punch cards and other early development environments

Today, your typing speed probably matters more than your keystroke accuracy. That wasn't so during an era (through the 1970s at least) when everything was keypunched on Hollerith cards. Hit the wrong key, and the card was ruined; you had to start over or try to fix it with Scotch tape and tiny pieces of paper.

Most developers learned to include line numbers even if the language didn't need them, so cards could be re-collated after they were (inevitably) dropped. Cards were available in assorted colors, allowing color coding of different sections of a deck, such as JCL (Job Control Language), program source and data. Another trick was to write the program name in magic marker across the deck, giving you a quick visual clue if a card was out of order.

Honorable mention:

# Non-WYSIWYG editing platforms. Some of us remain comfortable with vi/emacs, command-line compile options or nroff for documentation formatting, but initially we programmers didn't have a choice.
# Eight-character limits on file names, which sure made it hard to write self-documenting code.
# The APL keyboard. APL was a great programming language for its time, but its symbols required a special keyboard and were even harder to remember than the most useless Windows icons.
# Memory dumps. If your code crashed, the mainframe spit out at least 100 pages of green-bar printout showing the entire contents of memory. You had to sift through the entire tedious listing to learn, say, that you had attempted to divide by zero, as Elkes wryly recalls. Expert programmers learned the debugging technique of filling memory with DEADBEEF (a "readable" hexadecimal value) to help them find a core-walker (the mainframe equivalent of a memory leak) -- see the sketch after this list.



computerworld

dhilipkumar

Pointer math and date conversions

Like sorting and hash algorithms, math functions were left up to the developer in an era of limited CPUs, which lasted well into the '90s. You might need to emulate 16-bit math instructions on an 8-bit processor. You wrote code to determine whether the user's PC had an 8087 math co-processor, which gave a serious performance boost -- but only if the code was explicitly written to exploit it. Sometimes you had to use bit-shifting and lookup tables to "guesstimate" floating-point math and trigonometry routines.
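For instance, on a CPU with no multiply instruction you built multiplication yourself out of shifts and adds. A minimal C sketch of the classic shift-and-add loop; on a real 8-bit machine you'd be doing this in assembly, one register pair at a time.

```c
#include <stdint.h>

/* Multiply two 16-bit values using only shifts, adds and tests --
   the way you did it when the CPU had no MUL instruction. */
uint32_t mul16(uint16_t a, uint16_t b)
{
    uint32_t result = 0;
    uint32_t addend = a;

    while (b != 0) {
        if (b & 1)        /* low bit set: add the shifted multiplicand */
            result += addend;
        addend <<= 1;     /* next bit of b is worth twice as much */
        b >>= 1;
    }
    return result;
}
```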

It wasn't just straight-up math, either. Every developer had to do date math (what's "three weeks from today"?) using Julian date conversions, including figuring out when Easter falls and adjusting for leap years. (Not that every developer has figured out leap years even now.)
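The "three weeks from today" chore really did come down to Julian day numbers: convert the calendar date to a day count, add 21, convert back. The sketch below uses the well-known Fliegel and Van Flandern integer formulas; leap years fall out of the arithmetic for free (Easter is left as the traditional exercise). The starting date is just this thread's date, used as an example.

```c
#include <stdio.h>

/* Gregorian calendar date -> Julian day number (Fliegel & Van Flandern). */
static long to_jdn(int y, int m, int d)
{
    return d - 32075L
         + 1461L * (y + 4800L + (m - 14) / 12) / 4
         + 367L * (m - 2 - (m - 14) / 12 * 12) / 12
         - 3L * ((y + 4900L + (m - 14) / 12) / 100) / 4;
}

/* Julian day number -> Gregorian calendar date (the inverse formula). */
static void from_jdn(long jdn, int *y, int *m, int *d)
{
    long l = jdn + 68569;
    long n = 4 * l / 146097;
    l = l - (146097 * n + 3) / 4;
    long i = 4000 * (l + 1) / 1461001;
    l = l - 1461 * i / 4 + 31;
    long j = 80 * l / 2447;
    *d = (int)(l - 2447 * j / 80);
    l = j / 11;
    *m = (int)(j + 2 - 12 * l);
    *y = (int)(100 * (n - 49) + i + l);
}

int main(void)
{
    int y, m, d;
    long today = to_jdn(2009, 5, 11);   /* the date of this thread */
    from_jdn(today + 21, &y, &m, &d);   /* "three weeks from today" */
    printf("%04d-%02d-%02d\n", y, m, d);   /* prints 2009-06-01 */
    return 0;
}
```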

Hungarian notation and other language limitations
Credited to Microsoft's Charles Simonyi, who hailed from Hungary, Hungarian notation is a naming convention that was used primarily by those writing for Windows.

It added a prefix to an identifier name to indicate its type or intended use. For instance, "pX" indicated that the variable was a pointer to X. If you changed the identifier's type, you'd have to rename the variable; that caused some ugly patches.

Although some developers continue to use Hungarian notation today, it's generally unnecessary; most text editors instantly tell you the variable type.
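A taste of what it looked like, using a few of the prefixes commonly seen in Windows-era C code (the variable names themselves are invented for illustration):

```c
/* Systems Hungarian: the prefix encodes the declared type of the variable. */
char  szCustomerName[64];   /* sz = zero-terminated string     */
int   nRetryCount;          /* n  = integer count              */
long  lFileSize;            /* l  = long                       */
int  *pnRetryCount;         /* p  = pointer (here, to an int)  */
int   bIsDirty;             /* b  = boolean flag               */

/* The maintenance trap: change lFileSize to a 64-bit type and the
   name becomes a small lie unless you rename every use of it. */
```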

Honorable mention:

# Fortran's formatted input and variable names bound to types (by default, variables beginning with I through N were integers)
# BASIC code requiring line numbers
# Initializing all variables to known values
# Null-terminating C strings
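On that last point, C still leaves string termination to you. A minimal sketch of the copy-into-a-fixed-buffer chore: strncpy does not guarantee a terminator when the source is too long, so forgetting the trailing '\0' (or the room for it) was, and is, the classic bug.

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[8];
    const char *src = "old-school";

    /* strncpy will NOT terminate buf if src doesn't fit, so you
       terminate by hand -- forget this and printf walks off the
       end of the buffer. */
    strncpy(buf, src, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';

    printf("%s\n", buf);   /* prints "old-sch" */
    return 0;
}
```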

Doing strange things to make code run faster

Throughout programming's (relatively) early history, it was common to rely on undocumented features, such as those in the early Windows API. Visual Basic programming required poking at Windows' internals, many of which were undocumented hacks. Naturally, these hacks broke every time Microsoft released a new version of Windows, and sometimes just because of a security patch.

Nowadays, everything is well documented and trustworthy. Yeah. Sure it is.

Honorable mention:
# Writing your own utilities to search for where you'd used functions or procedures and where you'd called them
# Manually optimizing the compiler's generated code to meet a project's performance goals
# Using one-letter variable names in a BASIC interpreter because you could see the increase in execution speed

Being patient
It's hard to explain how slowly programming happened back in the day. You started a compile and went to lunch. When compile time took several hours, you wrote as much code as possible and then dove into a multipart debugging session.

Unfortunately, as most programmers know, the more things you mess with, the harder it is to find the bug causing the problem. Instead of writing and debugging one routine at a time, it took weeks to get the code written and tested.

To speed up development time, programmers would arrive at the office at some ungodly hour in the morning to get dedicated time on the mainframe (if they weren't paying per CPU cycle on a time-sharing system).

The only benefit of the long debug cycles was that it made programmers think about their code rather than slapping it together. And we had time for lunch.

computerworld