On Forgiveness in UX Design

As engineers and designers, we need to focus on building products that have empathy and forgiveness for their users. 

Software is eating the world, but as it optimizes for engagement and retention, it leaves behind confused and exhausted users. 

Companies raise millions of dollars at billion-dollar valuations. With those valuations comes a drive to add new features. With the move to SaaS for everything, user interfaces and modes of interaction seem to change overnight.

Perhaps we could take inspiration from the consumer packaged goods industry. 

As a new father, I have changed diapers in all sorts of circumstances: in the dark, in the park, while trying to mitigate a full-on meltdown, and sometimes just trying to stem an avalanche of 💩.

And yet, the diaper works as intended. Forgiveness is built into the design. I can operate it one-handed if I have to, and it gives some protection even when not used correctly. I can be confident that the design won’t change dramatically in the next iteration.

So, dear UX designer, next time you fire up Figma, think of the humble diaper, and a poor sleep-deprived dad dealing with a poop 🌋 at 3am. 

Think of the mistakes a user may make, and design your application to forgive rather than punish them, even when they are addled, distracted, or simply exhausted.
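
To make that concrete, here is a minimal sketch of one forgiving pattern: instead of committing a destructive action the instant a tired user taps the wrong button, queue it and leave a short window to undo. The function names and the five-second window are illustrative assumptions, not a recommendation from any particular framework.

    // A minimal sketch of a "forgiving" delete (hypothetical names, plain JavaScript):
    // queue the destructive action and give the user a short window to undo it.
    function softDelete(item, actuallyDelete, undoWindowMs = 5000) {
      const timer = setTimeout(() => actuallyDelete(item), undoWindowMs);
      return {
        undo() {
          clearTimeout(timer); // the 3am mis-tap costs nothing
        },
      };
    }

    // Usage: show an "Undo" toast wired to pending.undo() while the timer is running.
    const pending = softDelete({ id: 42 }, (item) => console.log('deleted', item.id));
    pending.undo(); // nothing is deleted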

Digital Transformation

Benedict Evans, as usual, provides an insightful view of the “Digital Transformation” story. While crypto, machine learning, NFTs, and drones may generate the most headlines, we are also in the midst of a generational shift in how we do business. This shift is happening in boring corners of the B2B Enterprise software market but will have an impact bigger than some of the other, more alluring, technology trends.


Companies like UiPath (process automation) have successfully targeted the dull areas of enterprise software that are ripe for automation and streamlining. The software in “software eats the world” may include headline-grabbing items such as machine learning and distributed ledgers. But, much more significant changes are being brought about by the adoption of SaaS applications and workflows. Twenty years ago, you couldn’t add a new software application without going through procurement and IT. Now, all you need is a corporate credit card.

Demographic changes and shocks such as Covid are only accelerating the technology megatrends for which “Digital Transformation” is a catch-all term.

The concept of a “generational shift” also works on multiple levels. Prolonged and painful migration projects can last for a generation (or longer). But we also have an entire generation of programmers and systems administrators who are now retiring, and companies can’t find the talent to keep workhorse systems going.


I am thinking about the second-order consequences of this shift to a software-first world. There is going to be more efficiency, more competition, and a chance for aggressive upstarts to ride the technology wave and displace (rather than disrupt) less agile incumbents. But, we will also have a generational loss of knowledge that cannot really be replaced by software.

As every aspect of our economy comes to be driven by software, we start seeing some of the characteristics of software show up. We have seen food shortages and supply chain issues across industries, as well as an incredible increase in ransomware attacks, as overly optimized systems break under unexpected disruptions like the Covid pandemic and insecure systems are targeted by malicious actors.

So, while I think this trend towards more Digital Transformation is good in aggregate, there are also serious consequences that we may not be paying much attention to as software continues to drive us to optimize.

On Modern Web Development

Tom MacWright wrote a much-discussed article on the state of modern web development a couple of weeks ago.

He states that the current default of building a Web Application as a SPA using React, or something similar, in the frontend and an API on the backend is overkill.

This is both an opinionated survey of the current state of web development and an extremely contrarian take on the accepted wisdom about how to build modern web applications. It’s worth a read.


His premise is that we keep piling on levels of abstraction such as virtual DOMs, server-side rendering, and bundle splitting. He even goes on to say that trying to build generic, purely RESTful APIs is a bit like tilting at windmills, since they end up tightly coupled with the frontend code anyway.
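
For contrast, here is a sketch of the “boring” end of that spectrum: a page rendered entirely on the server with nothing but Node’s built-in http module, no bundler, no virtual DOM, no separate API. This is my own illustration of the trade-off being discussed, not something taken from MacWright’s article.

    // A deliberately boring server-rendered page: no React, no build step, no API layer.
    // Uses only the Node.js standard library.
    const http = require('http');

    const todos = ['buy milk', 'write newsletter']; // stand-in for real data

    http.createServer((req, res) => {
      const items = todos.map((t) => `<li>${t}</li>`).join('');
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(`<html><body><h1>Todos</h1><ul>${items}</ul></body></html>`);
    }).listen(3000, () => console.log('listening on http://localhost:3000'));

Plenty of applications never need much more than this; the argument is about how far beyond it you really have to go before the extra abstraction pays for itself.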

I am sympathetic to MacWright’s point of view — mainly because I am rapidly transitioning into the “grumpy old man” phase of my career, and I find the modern web development workflow terrifyingly complex. I had to work on a React application a couple of years ago. It made me want to run back into the warm embrace of Java Swing development (oh, beautiful Java Swing).


But I also want to note that software engineering is pretty fashion-driven: we bounce between too much and too little abstraction. Every generation has this battle. Java didn’t have enough features, so we ended up with Java EE; Java EE was too bloated, so we ended up with Spring; the Java language was too complex, so we built Go; Go was missing generics, so we went ahead and added them. The pendulum keeps swinging, and we have a complexity crusade every 5–7 years or so. Unsurprisingly, it has been just about 5 years since React became the de facto way of building web applications. Time for the backlash!

I am sure we will be back to folks writing pure HTML and FTP-ing it to an Apache server any day now 😉

BioWare & Anthem: A Cautionary Tale

I love playing video games. My favourite game series is Mass Effect, made by a game studio called BioWare.

I am a huge fan of BioWare’s games and I was looking forward to trying out their new game Anthem.

Anthem came out in early 2019 to universally poor reviews. The game was half finished, had poor gameplay and did not respect the player’s time. I ended up giving it a miss.

This week, Kotaku’s Jason Schreier published an excellent article about what went wrong with Anthem. It turns out Anthem’s development was troubled, and the development team faced the same sorts of problems (some self-inflicted) that I have seen a few times in my career. I suggest you take some time to read it; it is a wonderfully researched and well-written piece.


So what went wrong? Schreier’s article talks about game studio politics, financial pressures, and many other contributing factors. I want to focus on what went wrong with Anthem as a software project. Here are some of my thoughts on this cautionary tale that I think may apply to many software development teams.


Building a product for a market you don’t understand

BioWare is famous for making role-playing games (also known as RPGs). The core team for Anthem had made well-loved games such as Mass Effect — single-player, immersive, story-driven experiences. They were asked by their owner, EA, to build a multiplayer, online, loot-driven shooter like Destiny.

These games made money through micro-transactions: charging players small amounts of money to buy in-game items. The Anthem team did not understand the fundamentals of the product they were being asked to create. Basic mechanisms such as loot drops (where players get rare items for completing missions) did not work well. YouTube is full of players ranting about Anthem’s lack of loot.


Not being clear about the scope of the product

The Anthem leadership team was slow in making decisions about critical features. Flying, a fundamental gameplay feature, was added and removed multiple times. The team only knew flying was firmly in scope after a demo to an EA executive, who decreed that the game must allow it. This is like building a desktop app and not being sure whether mouse input should be enabled. It makes things difficult for the design and development teams.


Being forced to use a technology stack by decree

Most modern games are built using game engines such as Unity or Unreal Engine. EA mandated that all its games, including Anthem, must be built using the in-house Frostbite engine. They wanted to cut costs and re-use resources by not having to pay a licensing fee. Apparently, the Frostbite engine, while used successfully in games like the Battlefield series, was poorly documented and not suited to a game like Anthem. The development team struggled to make it work, and productivity suffered.

This was the same team that had made the Mass Effect series using the Unreal Engine.


Shipping before being ready

A YouTube reviewer I follow remarked that the game played as if the developers hadn’t even bothered playtesting it. The article talks about unstable builds and unavailable test environments. There was no time left for a thorough quality assurance process. When the game did come out, players complained about interminable load times and repetitive missions. There were also hard crashes and an unusable inventory system: a big problem for a looter shooter.


In conclusion..

Anthem is a failure and has cost BioWare credibility as a game studio. I think it failed because it was a poorly managed software project: a poorly thought-out product released in a broken state by a confused and stressed development team. This pattern is not unique to the video game industry. I am sure software engineers of a certain vintage have seen it play out in a variety of industries.

4 Waves of AI – And why they matter

I can’t open a newspaper or visit my friendly local bookstore without coming across a think piece about why AI is a *BIG DEAL* and how it changes everything. The tone of most of the material that I have come across is aptly summed up in this classic xkcd panel.



In January 2019, I read Kai-Fu Lee’s fantastic book “AI Super-Powers: China, Silicon Valley, and The New World Order.” Mr. Lee is a thoughtful, even-handed guide to what is going on in the field of Artificial Intelligence (specifically Machine Learning) and how it may impact our future. The book is also an eye-opening account of the Chinese startup eco-system — but perhaps more on that another day.

Early in the book, Mr. Lee talks about how the spread of AI is happening in four waves. These waves are:

  1. Internet AI
  2. Business AI
  3. Perception AI
  4. Autonomous AI

Let’s take a quick look at each of these waves.


Internet AI

We deal with Internet AI every time we shop online, scroll through our social media feeds or Google something. From AI Superpowers:

Internet AI is mainly about using AI algorithms as recommendation engines: systems that learn our personal preferences and then serve up content hand-picked for us.

Examples of Internet AI include online advertising optimization, personalized news feeds, and algorithmic content recommendation.


Business AI

Advances in machine learning have allowed businesses to take advantage of the labeled, structured data that resides in their data repositories and to train algorithms that outperform humans on clearly defined optimization tasks. Examples include automated credit scoring, fraud detection, algorithmic trading, and supply chain optimization. While not the most exciting topic, Business AI has the potential, in the short term, to significantly change the way we work and, more potently, what *types of work* make sense to automate.

Business AI is about optimising and generating value from structured data.

Business AI has the potential to make what were once stable professions like accountancy, insurance, and medicine obsolete in their current form. It also has the potential to generate vast and lucrative new opportunities. More on this later.


Perception AI

Perception AI is about the “digitisation of the physical world.” It uses real-world data captured from IoT devices, cameras, smartphones, and other hardware to blur the lines between the online and offline worlds. We already see applications of facial recognition and machine translation technology enhancing offline experiences such as shopping and travel, and enriching fields such as education.

Perception AI is about blurring the lines between the online and offline world

Augmented reality (AR) devices and applications will only accelerate the merging of the offline and online worlds. Perception AI also has worrying implications for surveillance, privacy, and data protection.


Autonomous AI

Autonomous AI represents the culmination of the three preceding waves. What was once science fiction is slowly becoming mundane. Autonomous AI fuses the ability to optimize over extremely complex datasets with powerful sensory capabilities, resulting in machines that can understand and shape the world around them.

Autonomous AI results in machines that can understand and shape the world around them.

We already see some limited applications of Autonomous AI in fields such as self-driving cars, automated factories, and autonomous pollinators.


What does it all mean?

Ben Evans, a partner at the storied VC firm Andreessen Horowitz, talks a little about the implications of advances in AI in the November 2018 presentation “The End of the Beginning”. He says:

“Tech is building different kinds of businesses, and so will take different shares of that opportunity, but more importantly change what those industries look like.”

He goes on to say that a combination of high internet penetration, changing consumer expectations, and a general “unbundling” of supply chains is creating business models that are, in turn, enabled and accelerated by AI. The breaking apart of tightly coupled logistics supply chains is just one example of this phenomenon.

At my work with Jeavio’s portfolio companies, I can already see this in action. We support entrepreneurs working in fields as diverse as customer experience analytics, construction, and high-tech agriculture. In each of these fields, we see applications of Business AI that have the potential to disrupt existing models and generate tremendous value.

In my previous career in high-frequency algorithmic trading, I saw technology disrupt financial markets. Advances in AI are now doing the same in a wide variety of fields.

While AI cannot by itself generate new business models, it is already a potent force multiplier which, when deployed effectively, can increase efficiency and help businesses capture more value. We may not need to worry about our Robot Overlords just yet, but we should keep an eye on the disruption and opportunities presented by the four waves of AI.

Do you know your dependencies?

A contributor on GitHub finds an abandoned but popular JS library and commits code that targets a Bitcoin wallet made by a particular company. Hundreds of other libraries use this library, so the malicious code, pulled in as a transitive dependency, ends up affecting thousands of applications.



npm (and npmjs.com) provide a valuable service in hosting JavaScript dependencies. But by blindly upgrading to the latest versions of libraries, developers can open themselves up to attacks like this one.

I would recommend developers understand how npm’s package lock mechanism works. This will ensure that your dependencies are reproducible and force the use of known and trusted modules instead of downloading the latest version.
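
As a rough illustration of what the lockfile buys you, the sketch below reads package-lock.json and prints the exact version and integrity hash pinned for each dependency. It assumes a lockfileVersion 2 or later file in the current directory; npm ci will only install artifacts that match these entries.

    // Sketch: inspect what package-lock.json actually pins (assumes lockfileVersion >= 2).
    const fs = require('fs');

    const lock = JSON.parse(fs.readFileSync('package-lock.json', 'utf8'));

    for (const [path, info] of Object.entries(lock.packages || {})) {
      if (path === '') continue; // the empty key describes the root project itself
      // Each entry records an exact version and an integrity hash;
      // `npm ci` refuses to install anything that does not match.
      console.log(path, info.version, info.integrity);
    }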

This is not a problem unique to the JavaScript eco-system. Python (via pip or conda) and Java (via Maven and Gradle) have similar issues. However, my totally subjective and unscientific observation is that JavaScript libraries tend to have far more dependencies (see the “left-pad” debacle, for example).

Ars Technica has a good write-up about this particular incident: https://arstechnica.com/information-technology/2018/11/hacker-backdoors-widely-used-open-source-software-to-steal-bitcoin/

13 years of mistakes and what I learnt from them..

In 2004, I graduated from university with a degree in Computer Science and a graduate job at a large investment bank. In the years since, I have worked as a technologist at some of the world’s biggest banks and, as of 2017, at a startup in London.

I have had many titles: business analyst, project manager, team lead; but I have always written code.

Earlier this week, I came across this thread on Hacker News. It is an “Ask HN” thread: “What habits make a programmer great?”. Go ahead and read it; it has some great advice.

It got me thinking about what I could add to that discussion. I am not a great programmer, but I get things done and I have had many years of mistakes to learn from.

So here are my 2p..


You will get frustrated, and it’s OK

Our field worships productivity. We hear about amazing applications built over a weekend. There is always someone who gets things done faster.

The truth is that programming is hard, sometimes lonely and often frustrating. Some of us work long hours, face challenging deadlines and difficult clients. Getting stuff done can be a long and arduous grind.

Getting angry and frustrated is perfectly normal. However, if not managed properly, it will lead to burnout and disillusionment.

Find out what coping strategies work best for you. When I feel like things are getting a bit too much for me, I go for a walk. Others grab a coffee and vent to a friend; some even talk to rubber ducks..

Be kind to yourself. Programming is mentally taxing, and your effectiveness depends on a number of factors that may be outside your control. Some days will suck, and you will feel like you got nothing done. It’s OK.


Know and love your tools

Our field moves fast, and every day there is a new tool, package manager or framework that will solve all your problems and make you a 10X programmer. This is a trap.

Take IDEs, for example. I use IntelliJ IDEA for Java, Python, and JavaScript development. It is probably the most complicated application on my computer. It has taken me years just to get to the point where I can navigate the interface, understand its capabilities, and be productive using the IDE. If I switched to something else, my productivity would plummet immediately.

You have already sunk hours into learning the tools you are using currently. Think of the return on investment (ROI) for all the time you have spent before moving on. Learn and love the tools that you use, and strive for max ROI.

Tools and frameworks come and go, but time moves only in one direction.


Don’t dive right in

Go on, fire up that IDE, you know you want to..

I am biased towards action. Given a choice between just getting started on doing something (anything..) or spending a few minutes thinking and planning, I tend to choose action over contemplation. I see this same tendency in many other programmers.

Diving right in feels good and feels productive; however, I have learnt that this is merely an illusion. This approach has led me to dead ends, wasted time, and frustration.

When faced with a feature to build or a problem to solve, I ask myself these questions:

  • What is the goal?
  • Will doing this move things (the product, project or company) forward?
  • How does this problem relate to what I know already? Do I know enough?

Once I have processed the problem, I work on outlining a solution. This can be a doodle, a user story or sometimes just a bunch of comments in an IDE. This exercise makes writing code the easy part of the whole process.
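
Here is a hypothetical sketch of what that outline can look like when it lives in the IDE: the plan written as comments and stub functions before any real code. The task and the function names are invented purely for illustration.

    // Outline-first sketch (hypothetical task): decide the shape before writing the code.
    function parseTrades(csvText) { /* 1. parse the file into records  */ return []; }
    function validateTrade(trade) { /* 2. reject malformed records     */ return true; }
    function flagOutliers(trades) { /* 3. flag anything oversized      */ return []; }

    function run(csvText) {
      const trades = parseTrades(csvText).filter(validateTrade);
      return flagOutliers(trades); // 4. the summary report goes here once the shape is clear
    }

    console.log(run('')); // []

The stubs are throwaway; the point is that by the time real code goes in, the shape of the solution has already been decided.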

“Start projects by measuring the current state of the world. This goes against our engineering instincts to start fixing things. But when you measure the baseline, you will actually know whether you are fixing things.”

Kent Beck

Refactor vs rewrite is a false dilemma

We should rewrite everything in Elixir!

Everyone comes across a codebase that is so convoluted, so messy, so gnarly that the first instinct is to bin it and start all over again. In some cases, that may be the right thing to do, for example if your client is willing to pay for a feature that would not be possible to implement given the current codebase.

However, in most cases, cleaning up the legacy codebase by refactoring it is the more productive approach. Modern IDEs and code coverage tools make it easy to refactor a large codebase. This, combined with a pragmatic approach to unit testing, will take you a long way towards quickly and safely delivering value.
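
One cheap way to make such a refactoring safe is a characterization test: pin down what the legacy code does today, however ugly, before you touch it. Here is a hypothetical sketch using nothing but Node’s built-in assert module (legacyPriceCalc is an invented stand-in, not code from any project mentioned here).

    // Characterization test sketch: record current behaviour before refactoring.
    const assert = require('assert');

    function legacyPriceCalc(net, region) {            // invented stand-in for gnarly legacy code
      const gross = region === 'GB' ? net * 1.2 : net; // undocumented VAT rule baked in
      return Math.round(gross);                        // silently rounds to whole pounds
    }

    assert.strictEqual(legacyPriceCalc(100, 'GB'), 120);
    assert.strictEqual(legacyPriceCalc(100, 'US'), 100);
    console.log('behaviour pinned; safe to refactor');

With the old behaviour pinned, IDE-driven refactorings can be applied aggressively, and the tests will tell you if anything observable changed.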

Refactor vs. rewrite is not a binary choice. It is a continuum that is heavily biased towards refactoring.

I have been part of numerous projects that were ground-up rewrites. Most were abject failures. We spent weeks (sometimes months) delivering functionality that already existed. By the time we got to adding value, the project was behind schedule and over budget.


Don’t ignore the context

Then there was the grim reaper of Git, and then there was the koda of Kotlin, and then we decided to write everything in Haskell..

Nobody wants to write terrible code. The messy codebase you inherited is the result of battles won and lessons learnt. It may give you a headache, but there is always something to learn.

It is worth spending some time to understand what problems the codebase is trying to solve and why it ended up the way it did. In most cases, it is the legacy codebase that is being actively used by your customers and paying your salary.

By ignoring the “why” and focusing on “how”, we are doomed to repeat the mistakes of the past. If the codebase is messy and broken, try to understand what caused that situation and try not to repeat the same mistakes or adopt the same patterns.


In conclusion

I have been part of more projects that were qualified successes or unqualified disasters than ones that were unqualified successes. I am a mediocre programmer, but I keep striving to learn from my mistakes and make small improvements every day.

What have you learnt from your mistakes?

Review: The Docker Book by James Turnbull

It is hard to avoid Docker. Hacker News has been abuzz with it for years, and The Register ran an exhaustive feature about it a couple of months ago; indeed, it seems to have taken over the DevOps world.

But why do I care? I am a developer; I live in the land of abstractions. The JVM is as low as I go, my friends.

The problem is, all developers need to do releases.. And releases have a tendency to go very wrong..


So, I decided to educate myself. What is it about Docker that has got the cool kids on Hacker News all excited?

I took a look at some YouTube videos, tried out the tutorials, and read a handful of blog posts and how-tos on Docker. I just couldn’t get my head around it! Finally, I took the plunge and spent the last couple of weeks working through James Turnbull’s The Docker Book.

So, am I enlightened? The short answer is – yes. I have enjoyed working my way through “The Docker Book”, and I have a much better idea of how to use Docker and the sorts of use cases it is designed for.

The book is written in a tutorial format. We start with the basics about Docker and containers and move on to installing Docker on your favoured Linux(1) distribution.

Once we have Docker up and running, we learn the basics: how containers are created from images and how those images can be layered. We learn how the Docker registry can be used to download standard images (for example, the ubuntu:14.04 image can be used as the base for a container running Ubuntu 14.04 LTS) and how to build containers from images that we define ourselves. The author walks us through setting up and managing some simple containers.
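
To make the layering idea concrete, here is a tiny, hypothetical Dockerfile in the spirit of the book’s examples (not taken from it): each instruction adds a new layer on top of the base image.

    # Hypothetical Dockerfile sketch: every instruction below adds a layer.
    FROM ubuntu:14.04                                  # base image pulled from the registry
    RUN apt-get update && apt-get install -y nginx     # layer: package installation
    COPY index.html /usr/share/nginx/html/             # layer: our own content
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]

Build it with docker build -t my-nginx . and run it with docker run -p 8080:80 my-nginx; because layers are cached, changing only index.html and rebuilding reuses everything above the COPY line.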

All the Dockerfiles, scripts, and code used in the examples are readily available from the GitHub repository that the author has set up for the book(2).

I suspect most readers will get the most value out of chapters 6 and 7 of the book. Here the author goes through some examples including:

  • Using Docker to build a test environment
  • Building a continuous integration pipeline using Jenkins and Docker
  • Building a web application that is deployed on multiple containers

These examples are quite detailed and well designed. Most of them could be used as the basis for a Docker-based application stack “in the real world”.

Chapter 8 explores the eco-system(3) being built up around Docker, focusing on service discovery with Consul and orchestration with Fig.

The book concludes with chapters on the Docker API and how Docker can be extended.

“The Docker Book” does not go into detail on how containers work beyond the introductory chapters. The focus of the book is learning what you can do with Docker, and it succeeds admirably. I deducted half a star simply because the author does not delve much into things like the performance implications of using Docker or how exactly the operating system allocates resources to applications running in containers. There are plenty of resources online on these topics(4).

You can’t go wrong with “The Docker Book” if you are looking for a hands-on introduction to Docker. James Turnbull is a good tutor, and the resources accompanying the book are great.

Will Docker solve my release woes? Is it actually ready to be deployed in a corporate setting? Perhaps a topic for another post..

My Rating: 4.5 out of 5

Notes:

  1. Instructions for using Docker on Windows and Mac OS X are provided but are skeletal. Basically, you need to use Boot2Docker.
  2. I worked through almost every single example from the Kindle edition and didn’t find a buggy script or typo!
  3. The eco-system is moving fast. Kubernetes from Google is also worth checking out.
  4. The Docker blog is excellent

Review: The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win by Gene Kim et al.

The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win by Gene Kim
My rating: 4 of 5 stars

“The Phoenix Project..” is a parable about technology and business, and an introduction to the hot new buzzword of the day – “DevOps”. We follow Bill as he is promoted to head of IT Operations at the fictitious auto parts company “Parts Unlimited”. Bill is in for a rude shock as he leaves his comfortable middle management job behind and is thrust into a world of corporate politics, disastrous projects, and a company rapidly falling behind its competitors, losing market share and money. The root of the problem appears to be a dysfunctional technology group and a complete breakdown in communication between the business and technology sides of the company.

The issues explored here will be familiar to anyone who has worked in technology in any sort of corporate setting. The challenges Bill faces – unclear requirements, unrealistic expectations, and ever-tightening budget constraints – are present everywhere. The book focuses on technology operations, and we go on a journey with Bill as he tries to institute a change management procedure, keep control of production environments, and balance key staff who seem to spend most of their time fighting fires instead of delivering on projects.

The methods and technologies Bill and his team adopt should also be familiar to most IT folks. We have change management procedures (ITIL), Kanban boards, and continuous delivery methods. Gene Kim et al. do an excellent job of explaining how these methods work, going beyond the buzzwords to show how they can be used effectively. The situation Bill inherits at Parts Unlimited may be extreme, but it’s not too far off the mark.

I strongly recommend The Phoenix Project to anyone who works in technology, in any domain. As a developer, I don’t have much insight into, or indeed interest in, how IT operations work and the sorts of challenges they face. This book forced me to think more about why we have change management procedures, and about how operating and maintaining an IT infrastructure is as challenging as (and probably more challenging than) building the applications that run on it.

I have deducted a star simply because the writing can be clunky in parts, and I feel the book would have benefited from more editing. The characters are caricatures of the personalities you find in most corporate settings. It can be a bit much at times (like the binder-carrying, manic-depressive CISO), but it didn’t detract from an informative and engaging book.


The WhatsApp acquisition

The water-cooler was abuzz this morning with news of Facebook’s $19 billion acquisition of WhatsApp, a tiny company. With the claimed 400 million users that WhatsApp brings to Facebook, the numbers involved value each user at roughly $40. That is an astonishing amount of money for a service that is monetised through application sales, not via advertising. There have been a number of articles and blog posts online analysing this deal. This is not one of them..

My colleagues are a quiet and taciturn lot. Outside of the lunch hour, office banter is limited to a “Good Morning” and a “See you later..”. For the first time in my admittedly short stint here, we had a bona fide conversation that was not even tangentially related to trading systems and market data feeds. We got talking about what it means to be a programmer working outside of the startup / Silicon Valley scene. One of my colleagues remarked that he had spent half a decade in further education and a lot longer learning the ropes to get to the point where he is comfortable and financially secure. He wondered if that time would have been better spent writing a new chat app or social network. Perhaps a new way of optimising the transmission and sharing of ribald jokes, or of improving the sexting workflow.

We carried on in a similar vein for a while when the most introverted of our lot spoke up. He said: “I was just never interested. The thought of building the next Facebook or Twitter just doesn’t excite me. It was never something that was on my radar.”

I spend way too much time on Hacker News. The Silicon Valley culture and eco-system fascinates me, but it does not inspire me. I marvel at the numbers that are thrown around. A few billion here, a few billion there, but I also wonder about the utility of it all. It is now fashionable to talk about how much of a talent drain banking has become. How so many people left promising careers in academia and engineering to cut code and make money on Wall Street and the City. In a few years I can see people talking in similar terms about Silicon Valley. “He was a promising scientist, but he joined Google to help them optimise the placement of adverts on search results.”

I find the earnest tone of discussions on Hacker News, and of the job postings for these startups, deeply ironic. They talk about changing the world, wanting rockstars, and working on cool new technologies. Yet the end goal is a big payout via IPO or acquisition, having built a better way of sharing food selfies. I think these headline acquisitions are a honey trap for programmers. Somebody like my colleague, who wouldn’t even think about working for a startup building a “trivial” app, might realise that the app could be a gateway to that long-dreamt-of retirement on the beach.. You might get a lot more people ready to work for peanuts in the hope of striking it rich one day. Perhaps it is not a colossal waste of money after all..