Unit 1, Lesson 1

Object Oriented Programming

You probably know what Object Oriented Programming means. OOP means you have classes and instances, inheritance, types, and exceptions. It means bundling behavior with data. It means mutable state, and polymorphic method dispatch.

…or does it?
What if I told you that everything you know about OOP… is a distraction?

Video transcript & code

I thought that today we might talk about the new programming fad all the kids are talking about, "object-oriented programming". It's a pretty small topic, but hopefully we can get five minutes out of it.

OK, yes, I'm joking… sort of. After all, like most programmers, I have shelves loaded down with books about object oriented programming. By any measure, OO is a huge topic with dozens of partly-compatible definitions and innumerable nuances and sub-fields.

But it wasn't always that way. In the late 1960s, Alan Kay first formulated the concept of "object-oriented programming". He built on ideas from other systems, including the Simula language, the Sketchpad system, the ARPAnet, and the Burroughs B5000 computer, as well as on concepts from cellular biology and mathematics. But while other systems included object or object-like concepts, Kay was the one to distill out an architectural style from these ideas, one which he called "object-oriented".

Kay described the fundamental idea he arrived at as:

…the insight that everything we can describe can be represented by the recursive composition of a single kind of behavioral building block that hides its combination of state and process inside itself and can be dealt with only through the exchange of messages.

The full definition of what it meant for a programming language to be "object-oriented" went through various evolutions as Kay and others iterated on the Smalltalk programming language. But in 2003, when someone asked him if he could boil down the essence of what OOP meant, he put it like this:

OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things.

That's it. That is object-oriented programming in a nutshell.

Let's see what this means from a practical standpoint. Here's an object-oriented program, or at least the beginnings of one. It's a very early start at a Markdown parser.

Let's take a look at it, starting from the outside and working in.

First off, we define some sample text in Markdown format. Then we use a send_message procedure to send a message from the :world to an object named :lexer. The name of the message is :text, and there is a single argument, which is the text string.

text = <<END
This is some *emphasized* text.
And this text is **strong**.
END

send_message(:world, :lexer, :text, text)

The :lexer object is defined further up. It consists of a block of code, which receives four arguments when it is run: the object's own name; its current state; a selector, which is the name of a message; and any arguments attached to that message.

There are no "methods" as we normally recognize them. Instead, there is one big case statement, which switches on the message selector. There are handlers for various messages, including :init, :text, :line, and so on.

object :lexer do |me, state, selector, args|
  case selector
  when :init
    state.merge(mode: :init, stash: "")
  when :text
    text = args[0]
    send_message(me, me, :line, text.lines.first, text.lines.drop(1))
  when :line
    line, rest = *args
    if rest.any?
      send_message(me, me, :line, rest.first, rest.drop(1))
    end
    send_message(me, me, :char, line.chars.first, line.chars.drop(1))
  when :char
    char, rest = *args
    stash = state[:stash]
    mode  = state[:mode]
    type  = case char
            when /\s/ then :space
            when "*"  then :stars
            when /\S/ then :text
            end
    if type == mode
      stash += char
    else
      send_message(me, :parser, :token, mode, stash)
      stash = char
      mode  = type
    end
    if rest.any?
      send_message(me, me, :char, rest.first, rest.drop(1))
    end
    state.merge(mode: mode, stash: stash)
  end
end

There is also another object, a :parser. This object is just a placeholder so far; it outputs the messages it receives from the lexer, but it doesn't yet do anything with them.

object :parser do |me, state, selector, args|
  puts "Parser received message #{selector.inspect} with: #{args.inspect}"
end

What is this object procedure? It is defined very simply. It takes an object name and a block of code defining the object's behavior. It registers the behavior along with some initial state—an empty hash—in a global map. Then it sends the :init message to the new object.

def object(name, &behavior)
  $world[name] = [behavior, {}]
  send_message(:world, name, :init)
end

The map that objects are registered in is nothing special; it's just a hash that starts out empty. Next to it is a global queue for messages.

$world = {} # <object name> => [<behavior>, <state>]
$queue = [] # [<address>, <selector>, [<args...>]]

At the bottom of the program, after queuing up the initial message to the :lexer, we invoke the run procedure. Let's look at the definition of this method.

def run
  while message = $queue.shift
    address, selector, args = *message
    next unless $world[address] # ignore missing objects
    behavior, state = *$world[address]
    new_state       =, state, selector, args)
    $world[address] = [behavior, new_state || state]
  end
end

It loops over the global queue of messages. It shifts each one off, and extracts the destination address, the message selector, and any arguments.

If it can't find the destination, it throws the message away. Otherwise, it looks up the receiver and pulls out the behavior and current state associated with that object. It then invokes the object behavior, with the object name, state, message selector, and message parameters as arguments. It saves the resulting value as a new state for the object, which is then substituted in place of the old state.

How does message sending work? It's a pretty straightforward process of adding them to the global message queue. The only complexity is that sends to self are prioritized over sends to other objects, by unshifting them onto the head of the queue instead of appending them at the end.

def send_message(from, to, selector, *args)
  if from == to
    # prioritize self-sends
    $queue.unshift [to, selector, args]
  else
    $queue << [to, selector, args]
  end
end

When we run this code, we can see a series of :token messages sent from the :lexer to the :parser, informing it of runs of text, whitespace, and markup.


Let's look back at Kay's definition of OOP. First off, we have messaging. In our program, objects talk to each other with send_message. Sometimes the :lexer sends messages to itself; other times, it sends messages to the :parser.

Then we have "local retention and protection and hiding of state-process". Our run procedure handles this. Every time it processes a message, it takes the existing object state, passes it in, and then uses the result as the new state. So state is retained, and objects interact only with their own internal state.

Finally, Kay talks about "extreme late-binding of all things". This is probably the trickiest concept in the whole definition.

In a nutshell, "late-binding" means putting off decisions—particularly, decisions about what names refer to—to the latest point possible. As an example, in this program, messages are sent to symbolic names for objects. The sender doesn't know which concrete object that name is associated with. It doesn't even know how the lookup will be performed, or even whether there IS such an object! It simply puts an address on its message, sends it, and hopes for the best.

In modern OO parlance, we talk a lot about "polymorphism". But in fact polymorphism is just one instance of this general concept of late-binding. Because it uses symbolic names instead of object references, message sends in this program are actually later-bound than the polymorphic method dispatch we normally use in Ruby programs.

Now that we have talked about how this program complies with the definition of OOP, let's talk about some of the things it leaves out.

Some of the concepts you won't find in this program include:

  • Classes
  • Inheritance
  • Types
  • Exceptions

But that's just the surface-level stuff. Let's talk about some of the deeper assumptions this program omits.

For one thing, there is no mutable state at the object level. When the lexer code wants to update its own state, it returns a new state hash derived from the old one. Our runtime is responsible for taking that new state and replacing the old one with it. Of course, it could do things differently. For instance, it could keep a record of past versions of objects if it wanted to.

An interesting side effect of this is that an object is never in a visibly intermediate state. While it processes a message it comes up with a set of state modifications, which are then applied all at once when the "method" is finished.
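This works because Hash#merge in Ruby is non-destructive: it returns a fresh hash and leaves the receiver untouched. (The keys below are just an illustrative lexer-like state.)

```ruby
old_state = { mode: :text, stash: "Hel" }

# merge builds a brand-new hash; the original is untouched.
new_state = old_state.merge(stash: "Hell")

old_state # => { mode: :text, stash: "Hel" }
new_state # => { mode: :text, stash: "Hell" }
```

Until the runtime swaps new_state in for old_state, no observer can see a half-updated object.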

Here's another peculiarity. When we think about message sends in modern "OO" languages, we normally equate them with something like function or procedure calls. We expect them to be synchronous. That is, we expect that when the lexer sends itself the :char message, the computer will interrupt what it is currently doing, drop down into the :char method, and then, when it is finished, come back and proceed with the method it started from.

But our program doesn't work that way. Messages are sent, but they aren't necessarily processed immediately. Currently, they are queued globally and processed sequentially; but conceivably there are other ways they could be prioritized and handled.

We also tend to think of message sends as being bi-directional. That is, we expect them to always have a return value, even if we don't always care about it.

But in this program, messages only go one way: to the receiver. If the receiver wants to talk back, it will have to send a new message to the sender. There are no return values.

Finally, in this program there are only a few named singleton objects. There is no object instantiation, and no management of instances.

And yet, for all that it is idiosyncratic and unconventional, this program is fully object-oriented. In fact, in some ways it's arguably more so than most conventional Ruby programs.

(By the way, it's possible you've noticed that if you squint, this program bears some resemblance to Erlang code. This is not a coincidence. Erlang was partly inspired by the Actor Model by Carl Hewitt, and Hewitt was in turn partly inspired by Alan Kay and early versions of Smalltalk).

As strange as it is, this program doesn't buck convention just for the sake of being weird. What I hope it conveys is that the essence of the object-oriented idea is much smaller and simpler than what we usually think about. And the simplest and smallest concepts are often the most powerful.

In this case, it's the idea that when we structure programs out of little cells that communicate only using messages, and those cells are themselves built of cells communicating with messages, and so on all the way down, it becomes very easy to extend, rearrange, and compose our programs.

That's the core of object-oriented thinking. It's an idea that's powerful at both the implementation level and at the architectural level. It's an idea that doesn't even depend on a particular programming language. And despite the shelves of books and the many secondary ideas that have agglomerated around it, in essence it's a very simple idea indeed. Happy hacking!