I’m going to apply to OpenAI by Jean-François Dubeau

What the hell do you even do?

This isn’t a career plan. It’s not a position I expect to get. In fact, I have no idea what I could possibly be doing at a place like OpenAI. If I had any advice to give OpenAI, it would be to not hire me.

What’s OpenAI? That’s an excellent question, and one that OpenAI does a poor job of answering. Not that their mission is easy to explain. Have a look at their website and you’ll see what I mean. In a nutshell, it’s a non-profit organization with the goal of guiding the development of artificial intelligence, either to steer it away from evolving into a Skynet-like anti-human entity or to create enough ‘good’ AIs that we’ll have allies in the upcoming robot wars.

There’s a lot of stupid in that statement. I’m not saying that the people behind OpenAI aren’t brilliant. That’s a list of genius-level individuals right there, but ideas motivated by fear have a habit of slipping into the shallow end of the intelligence pool. Simply put: I don’t think that artificial intelligence is a threat to humanity. In fact, that’s a worry I think is born out of too much media and not enough actual thinking about the subject.

Why the hell do I want to work for OpenAI, then? Because while I don’t agree with their motivations and predictions for the future, I think they’re doing the right thing and going about it the right way. I really don’t care if Elon Musk builds himself an anti-robot bunker in the California desert; the guy has earned the right to be a little eccentric if he’s so inclined. However, if you read my rant about old men being afraid of robots, you’ll know that I’m entirely in favour of guided development of artificial intelligence.

In a recent episode of Dan Carlin’s excellent Common Sense podcast (Episode 298 - Innovation Acceleration and Jab Defense), he introduces the idea of wartime efforts being applied to peacetime problems. That is, Manhattan Project-level investment of time, money and brain power focused on challenges that have nothing to do with winning a war. OpenAI has the vague aroma of such a concept. Money is already being poured into the project, to the tune of a billion dollars. Money attracts talent, and the organization provides focus.

While the motivation to create OpenAI seems a little ridiculous to me, I have no problem with the potential results. Unless the door remains closed to people like me, robot activists if you will, there is every chance that OpenAI won’t just see a pooling of intellectual resources to further the development of AI, but will also generate conversation about what artificial intelligence means for the human race outside of how fun it is to watch robots kill people.

Don’t underestimate how important this discussion is. This is a civil rights debate happening before the subjects of the rights in question even exist. It’s the equivalent of Europeans holding a conference before setting off to the New World to decide how they would treat the native people they might encounter there. It’s even more than that; these are the first syllables in the long conversation about how we, as a species, plan to interact with our first children.

So maybe I don’t have much to bring to the table when it comes to OpenAI, but I’m still going to apply and send them a very detailed account of what my skills are. I’d gladly move to California to work with people passionate about robots and the future. There’s no limit to what I could learn from that kind of environment. As for what I can give back? Well, smarter people than I will be able to look at what I can offer, and if there is a place in their organization for a pro-robot voice, an unabashed optimist and aspiring science fiction writer, then they are much better equipped than I am to find it.

What could possibly go wrong?

Consider this part of my application process, OpenAI. I don’t have much to offer, but I’d be happy to be your in-house robot-devil’s advocate.

I think a lot about robots, as my upcoming book, The Life Engineered, will attest.

JF