Embarrassingly parallel problems – increasing efficiency
“The way to get started is to quit talking and begin doing.” - Walt Disney
During the summer, as a means of differentiating holiday from home schooling, my family planned Disney week. The aim was to take advantage of the free trial period Disney Plus had advertised, binge on as many classic films as possible during a rainy few days, and then become so saccharine-saturated that the rest of the holiday away from technology might be a blessing.
As it turned out, the free trial period had ended, the weather wasn't as poor as expected, and there were far too many great films to squeeze into a week. A week became a month, and we only scratched the celluloid surface.
Walt Disney received a record 59 Oscar nominations for his films before his death at the age of 65. This was an impressive achievement in scale and productivity (although, of course, he didn't do it all himself!).
Working from home has been many things – both a gift and a grind; an obligation but also an opportunity. In particular I have been considering how to be more productive with my time, to ensure that I work both effectively and efficiently.
The Covid-19 crisis has led many sectors to rethink how they operate. For example, the NHS Reset campaign seeks to explore what the health and care sector should look like in the future. McKinsey & Company shared strategies for CEOs to restart to “ensure safe and successful relaunch of economic activity”. And Tearfund’s Reboot campaign seeks to take the opportunity to address issues of inequality in society. Barry Carpenter has applied this thinking to education through his recovery curriculum. “Readjust”, “rethink”, and “rebuild” are amongst many verbs that currently reverberate.
The one area of my working life most frequently in need of reboot or recovery is, of course, my laptop. We’ve probably all considered this when faced with Windows’ hourglass or macOS’s “spinning beach ball of death”. How do computers tackle increasing demands on their workload? Are there any lessons from computer design we can learn to help us become more efficient?
Von Neumann architecture
Most computers use a design known as “Von Neumann architecture”. (BBC Bitesize has a summary here.) The basic principle is that a computer carries out instructions in a sequence, one at a time. A central processing unit (CPU) fetches an instruction and its data from memory, and then executes the instruction (e.g. adding two numbers together). This fetch–execute cycle is repeated for each instruction in turn, and is controlled by an internal clock, which keeps all operations in synchronisation.
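As a toy illustration of that cycle (the three-instruction machine, its registers and its little program below are all invented for this sketch), the fetch–execute loop might look like:

```python
# A toy model of the fetch-execute cycle. The instruction set,
# registers and program are invented purely for illustration.
memory = [
    ("LOAD", 0, 5),  # put the value 5 into register 0
    ("LOAD", 1, 7),  # put the value 7 into register 1
    ("ADD", 0, 1),   # register 0 += register 1
    ("HALT", 0, 0),  # stop the machine
]
registers = [0, 0]
pc = 0  # program counter: which instruction to fetch next

while True:
    op, a, b = memory[pc]  # fetch the next instruction in sequence...
    pc += 1
    if op == "LOAD":       # ...then execute it, one at a time
        registers[a] = b
    elif op == "ADD":
        registers[a] += registers[b]
    elif op == "HALT":
        break

print(registers[0])  # prints 12
```

Everything happens strictly one instruction at a time, which is precisely the bottleneck the techniques that follow try to ease.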
Traditionally there are two methods under this architecture to improve performance:
- Improve the pipelining process
- Increase the clock speed
In a workplace environment, increasing the clock speed equates to “work faster”. Whilst this might apply to some colleagues, in most situations this will not necessarily lead to improved effectiveness.
Alternatively, to improve the pipelining process, the CPU fetches the data for the next instruction while simultaneously executing the current one. You can imagine a manager sending for various employees in sequence to bring back information on which to base a decision, and making this practice more efficient by having each colleague ready to follow on from the last. It’s still, however, a very autocratic style, depending both on synchronisation and on the health of the central processor.
More recent CPU design makes use of multi-core processors. In these systems, several (perhaps six) processors operate simultaneously on different sets of data. (In an organisation, think of processors as senior leaders.) As long as the remits for these processors are kept separate, this design can increase efficiency substantially. The danger is that one processor executes an instruction on data that another processor is already accessing (like a colleague overwriting the changes you have just made in a Google Doc).
The trick is to ensure that multi-core processors don’t interfere, by clearly partitioning the data so that each processor has its own distinct workload. The system can also check which processors are busy at any one time and dynamically redistribute the workload (in practice, a job for the operating system’s scheduler).
Effective managers operate instinctively in this way. They ensure that employees fully understand their level of autonomy. They monitor workload, reassigning work as necessary, and ensure that boundaries and guidelines are clearly communicated. They find and address idleness, delegate tasks with clear expectations, and ensure that the outcomes of those tasks are communicated back at the appropriate level.
The shift here is from a single manager (operator) to a team in which each member communicates effectively with the manager. The distribution of activity increases, leading to increased efficiency, but at a necessary cost in communication. Might this shift be taken further still?
Imagine that a computer is required to blur the faces on an image. This can be done by looking for individual pixels in the image with a range of colours approximating to skin tones. The colours of these pixels could be rendered as an average of the surrounding 100 or so pixels, creating a blurred effect.
This process could be carried out either sequentially (by one processor, a pixel at a time) or through multi-core processing (with each processor operating on a section of the image). The process itself is simple, and could equally be completed using thousands of processors operating on one pixel each.
This is an example of an embarrassingly parallel problem (as in "an embarrassment of riches"). The operations are so simple, and so easy to demarcate, that a large task can be undertaken very efficiently given enough parallel resources.
Schools have recently, for example, been deciding how best to ensure that students disinfect their hands between lessons. One option would be for the headteacher to supervise hand washing at a single sink with a single long line of children. Whilst effective, this would most certainly not be efficient (1800 students at 20 seconds per student would take 10 hours!). Even increasing the number of sinks to 6, supervised by all members of the leadership team, would still mean the hand-washing took longer than the subsequent lesson. The solution, of course, is for students to carry their own hand gel and sanitise separately. It’s embarrassingly simple.
Is there a limit to the number or type of operations that can be carried out in this way? As instructions become more complicated, they take more time (and energy) to undertake, and are therefore at greater risk of interference. Some processor designs, such as ARM’s, follow the Reduced Instruction Set Computer (RISC) philosophy: they prioritise clock speed over instruction complexity, supporting fewer types of operation but completing each one more efficiently. As in any system, this is a simple cost/benefit analysis.
Lessons in leadership
In leadership, particularly in educational leadership, the lessons for efficiency and effectiveness learned from Von Neumann architecture might be summarised as:
- Work quickly, but beware the limitations
- Use others to undertake activities, but ensure their responsibilities are clearly partitioned
- Identify activities that are embarrassingly parallel, and can be completed by many without interference (e.g. “Tuck your shirt in!”)
- Ensure the balance between centralisation and distribution of leadership is fluid, communicated and appropriate to the task
- Quit talking and begin doing
The end from the beginning
Larry Wall, creator of the Perl programming language, identified the three virtues of a programmer as laziness, impatience and hubris. He argued that these lead to more efficient code in the long run. My principal tutor at university was an internationally renowned composer of contemporary music. I clearly remember him telling me that one of his most significant vices was his laziness. He is now approaching his 90th birthday, but his list of published compositions is limited to about 60 – perhaps one work per year. Compare this to Mozart, who composed more than 600 works in 30 years, or Telemann, who composed over 3,000 in 75 years.
My tutor explained that if he had just put pencil to paper without distraction, he might have been more prolific. Walt Disney didn't seem to suffer from this problem. Having thought about writing this article for a while, it’s ironic that it didn’t take too long once I began...