Responsibility: What We Offer the World With or Without AI

If you’ve ever been in a position of leadership scouting who the next leader or set of leaders should be, you’ve likely found it rather difficult to replace yourself or your peers. Not because you were irreplaceable, either. I mean, sure, you got where you did by your own unique story (path dependence and all that); maybe you had a perfect storm of connections or a combination of skills no one else can truly replicate. But that’s not why you’re hard to replace. After all, a unique blend from another human would change the trajectory, but not necessarily in a negative way.

You and your peers are hard to replace because few people truly wished to be in your shoes.

That has been my experience, at least. Running the Biomedical Engineering Society (BMES) at RPI taught me this deeply, and few opportunities since have unseated the lesson. Shortly after COVID, BMES found itself at its lowest point, in both membership and morale. The prior president had shrugged the responsibility off onto their VP on a whim, stating something like ‘It’s your problem now’ after getting too much heat for being too busy to run their own club. Campus participation had dropped nearly 80%, and the club’s leadership followed close behind. So once students returned to campus, the small group that remained had a reluctant but steadfast leader trying to raise the club from the depths.

Two years later, I learned of the club’s existence and decided that I would help its ascent, chiefly by running the club myself. I came in ready for a competitive scene: race to the top, earn the respect of my peers, and make the best decisions I could for the club’s future. It came as quite the surprise when, in my very first meeting with the club’s e-board, our president asked if anyone at all wanted to take her role. She was promptly met with shifty eyes and silence.

The role invited a higher degree of responsibility than any other. When the direction forward became unclear, this role was to illuminate the path. When fingers were pointed for blame, the responsible figure was, well, responsible. It also meant you could not truly clock out; there were no off-hours, only hours when you trusted the process more readily. You’ll have times when your choices seem to go wrong no matter what you do, and times when it feels so easy it’s like you’re somehow cheating someone or something. But despite being ‘in control,’ you will often have very little.

At the time, I offered to take up the BMES presidential mantle, but given my lack of experience and reputation, the offer was rightfully declined. I campaigned throughout that year and learned an awful lot about what the role would ask of me and the responsibility it would demand. I also learned what fulfilling it would ultimately mean: providing the BME cohort with opportunities they didn’t know they could have, or couldn’t have without the club’s help. I took it very seriously.

Now what does this have to do with the world of AI? Well, soon enough the only jobs left will be those of higher and higher responsibility, where decisions move mountains and accountability is at an all-time high, for everyone. This has been and will be our future, and no amount of singularities will change it.

Let’s imagine for a moment the hierarchy of a company. The ‘lowest’ worker works 9 to 5 and clocks out. They are compensated for their time and skill, and are told what to work on. The ‘highest’ is paid directly on company success. Their hours are not counted, and what to do is entirely up to them and their board. Everyone in between does a mix: contract or salaried, flexible hours and equity compensation, with higher degrees of decision-making power but with bosses of their own. They are shielded publicly more often than not, and thus are held responsible only internally. The ‘highest’ role gains no such shield, and is scrutinized even for decisions they did not make (and often praised for them too).

This is the reality of responsibility: high highs, low lows. The more one is willing to take on, the more risk one inherently assumes. Taking it on means having confidence that you’ll make the best decisions without all the information, that you will survive when you are knee-deep in it (even when it’s not your fault), and that you’ll be humble enough to pass the praise along when it does come. This is not to say that the ‘lower’ roles involve no decision-making, but what they are deciding is incredibly different.

In a world of artificial intelligence capable of reliably accomplishing any and all tasks it is told to, the AI is essentially granted the ability to make ever-greater choices. When you ask ChatGPT to write you an algorithm, you are giving it the ability to decide how to do so. Something as ‘simple’ as a calculator involves many, many choices:

  • Which algorithm to use for addition: ripple-carry, carry-lookahead, or table lookup?
  • How to represent numbers: binary, binary-coded decimal (BCD), or floating point?
  • When rounding results, which mode: round-to-nearest, truncation, or banker’s rounding?
  • How to handle exceptional inputs like division by zero or overflow (error flag vs. wraparound)?

The list goes on. Like the 9-to-5 worker, the AI makes these decisions for you so you can pursue the higher goal above the trees. These details don’t matter to higher management; all that matters is that the result serves the goals they’ve set. It is not unreasonable to suggest that the ability to decide whether to build a calculator is an entirely different set of skills than all of the mathematical and computational skill that building one could require.
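To make this concrete, here is a minimal sketch (the name TinyCalc and its options are hypothetical, invented for illustration) of how even a toy 8-bit calculator quietly bakes in the decisions listed above: number representation, overflow policy, and rounding mode. Two calculators given the same inputs return different answers, and each difference traces back to a choice someone, or some AI, had to make.

```python
class TinyCalc:
    """A toy calculator whose every answer embeds a design decision."""
    WIDTH = 8                       # representation choice: 8-bit unsigned integers
    MAX = (1 << WIDTH) - 1          # largest representable value: 255

    def __init__(self, overflow="wrap", rounding="nearest"):
        # overflow: "wrap" (modular, like real hardware registers) or "flag" (raise)
        # rounding: "nearest" or "truncate", applied to division results
        self.overflow = overflow
        self.rounding = rounding

    def add(self, a: int, b: int) -> int:
        total = a + b
        if total > self.MAX:
            if self.overflow == "wrap":
                return total & self.MAX     # wraparound on overflow
            raise OverflowError(f"{a} + {b} exceeds {self.WIDTH}-bit range")
        return total

    def div(self, a: int, b: int) -> int:
        if b == 0:
            raise ZeroDivisionError("exceptional input: division by zero")
        if self.rounding == "nearest":
            return round(a / b)             # round-to-nearest
        return a // b                       # truncate (for non-negative inputs)

# Same inputs, different answers: each gap is a management-invisible decision.
wrap = TinyCalc(overflow="wrap", rounding="nearest")
strict = TinyCalc(overflow="flag", rounding="truncate")
print(wrap.add(200, 100))   # 44: wraps around past 255
print(wrap.div(7, 2))       # 4: rounds 3.5 to nearest
print(strict.div(7, 2))     # 3: truncates 3.5
```

None of these choices appear in the request “build me a calculator”; they are delegated downward, whether to an engineer or to an AI.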

This matters because as the capacity of artificial intelligence broadens, so too do the roles it is allowed to make decisions in. And the roles left behind demand that higher and higher responsibility be taken up: the very thing many of us have avoided, or been denied, for much of our careers.

If I attempted to list the things we humans do to find a place in society, it would look a bit like this:

  • Innovators and researchers, driven by curiosity
  • Artists and other creative producers
  • Social connectors and influencers
  • Managers and organizers of people and systems
  • Visionaries willing to take risks and shoulder responsibility
  • Specialists with replicable, reliable, transferable skills

These abilities all have a place in our society, but with competent AI, one of them is critically challenged and constricted. Why? Because of each word that defines the role:

  • Replicable means reproducible, trainable.
  • Reliable suggests consistency: done at scale and without concern for error. AIs do not sleep, and with proper systems they will rarely err.
  • Transferable suggests knowledge that can be ascertained, discovered, and written down. Such formats are ideal for an AI.

But there is one word missing from this role, something critical that the other roles struggle with less or need not consider at all: low scale.

The world’s greatest engineer does not build ChatGPT; a team of them does. Mark Zuckerberg may have created Facebook, but he most certainly did not and could not build it alone. By the time he finished, it would have been long irrelevant. This is because being the world’s greatest engineer is low scale. Important, yes. Worthy of very high pay, sure. And most definitely critical for any company’s success. But it is also, by its very nature, at risk from AI.

This leaves the specialist vulnerable. Researchers create new fields, artists create meaning, visionaries create companies, managers create teams. But specialists only perfect what already exists. Their work is vital, yes, but also low-scale and easily codified, which makes it the first to fall to automation. High-impact roles are messy, subjective, and irregular. They demand judgment, not skill, and they always require someone human to blame when things go wrong. No amount of artificial intelligence will erase that human, societal need.

This is why the “singularity” is a myth of convenience. Even if machines become superhuman at every skill, someone still decides what to build, when to use it, and why. Those decisions move mountains; no one smart delegates mountains to an algorithm. And by the time they do, it will be because we are making planetary decisions. This has been the case since we conceived of societies, and will continue to be the case until we become too dangerous to coexist with.

Those many consider the smartest among us often struggle with what to do, rather than how to do it. How to do it is a week of caffeine-fueled nights and weekdays of grinding. Our strongest specialists are considered geniuses, but ultimately within their box; that’s why pet projects from our brightest rarely amount to much. What to do can take months of toiling over information, subjective philosophies, conflicting visions, committees, and negotiation. It sits higher up the pole of responsibility and demands consideration as opposed to perfection. The AI of tomorrow won’t be perfect, but it will most certainly be enough.

My BMES presidency was one of the hardest things I did at that school, harder than thermodynamics or intro to machine learning, and not because I lacked intelligence (I hope). What was hard was taking responsibility. When the time comes, that BMES president will ask the same of you, and if you keep your hand down, you may find your peers looking increasingly artificial.