AIs using their invented shorthand – Screenshot: courtesy of Facebook

It wasn’t so long ago that we programmed bulky, slow computers in machine language, feeding them binary or hexadecimal instructions because that was about all the computational complexity the early machines could handle.  Now, of course, we have high-level languages that compilers and interpreters break down into many lines of simpler instructions, so we get more done with our much faster computers.  But what if a computer wanted to collaborate with another computer?  How would the two of them want to talk to each other?  Humans have developed unique dialects when normal language conventions are not possible or are too slow, for example writing in code, in shorthand, or with hand signals.  Well, as it turns out, computers that reason through artificial intelligence (AI) are apt to invent their own shortened language too.

Facebook Artificial Intelligence Research (FAIR) recently published research introducing AI dialog agents that have the ability to negotiate, and it came with a big surprise.  The two AIs were trained with different goals but were explicitly tasked, through neural networks, with pursuing end-to-end negotiations.  In one scenario from the study, two agents were given a set of objects, such as balls, that held different values for each AI, and were tasked with negotiating a split of the items.  The two agents were incentivized on a point basis to come to an agreement in 10 rounds or fewer, or they would both get 0 points.
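The incentive structure described above can be sketched as a small game. The following is a toy illustration only: the item names, valuations, and acceptance rule are assumptions made for the sketch, not the study's actual setup or code.

```python
# Toy sketch of the negotiation game: two agents split a pool of items,
# each agent values the items differently, and if no agreement is reached
# within 10 rounds, both agents score 0 points.
# All names and numbers below are illustrative, not from the FAIR paper.

MAX_ROUNDS = 10

def score(split, values):
    """Points an agent earns for the items in its share."""
    return sum(values[item] * count for item, count in split.items())

def negotiate(pool, values_a, values_b, proposals):
    """Run A's offers (each offer = A's requested share) until B accepts one."""
    for offer in proposals[:MAX_ROUNDS]:
        remainder = {item: pool[item] - offer.get(item, 0) for item in pool}
        # Assumed rule: B accepts if its share is worth at least half
        # of its valuation of the whole pool.
        if score(remainder, values_b) >= score(pool, values_b) / 2:
            return score(offer, values_a), score(remainder, values_b)
    return 0, 0  # no deal within MAX_ROUNDS: both agents get nothing

pool = {"ball": 3, "hat": 1, "book": 2}
values_a = {"ball": 1, "hat": 5, "book": 2}   # A prizes the hat
values_b = {"ball": 2, "hat": 1, "book": 3}   # B prizes balls and books
offers = [{"ball": 3, "hat": 1, "book": 2},   # A asks for everything: rejected
          {"hat": 1, "book": 1}]              # A concedes the balls: accepted
print(negotiate(pool, values_a, values_b, offers))  # → (7, 9)
```

The key feature the sketch preserves is the shared failure state: because a stalemate costs both sides everything, each agent has a reason to learn when to concede.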

One of the numerous negotiation scenarios the researchers set up had one agent negotiate with a human online, which it did very successfully, in fluent English, without the person on the other end knowing they were talking to a bot.  However, when the AIs went toe-to-toe in adversarial negotiations, there was nothing requiring them to communicate in fluent English, so they slowly began to speak in a shortened dialog that looks like gibberish to most of us (see photo at top).  Dhruv Batra, a visiting research scientist from Georgia Tech at Facebook AI Research (FAIR), comments: “Agents will drift off understandable language and invent code words for themselves.  Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
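Batra’s “the” example amounts to a trivial encoding scheme: repeat a token to signal a quantity. A minimal sketch of that idea follows; it is purely illustrative, since the agents’ actual learned code was not documented this way.

```python
# Illustrative sketch of repetition-as-quantity shorthand, per Batra's
# example: saying a word five times means "I want five of this item."
# This is a toy reconstruction, not the agents' actual learned protocol.

from collections import Counter

def encode(wants):
    """Encode desired item counts by repeating each item's marker word."""
    return " ".join(" ".join([word] * count) for word, count in wants.items())

def decode(message):
    """Recover item counts by counting repetitions of each marker word."""
    return dict(Counter(message.split()))

msg = encode({"ball": 5, "hat": 2})
print(msg)           # → ball ball ball ball ball hat hat
print(decode(msg))   # → {'ball': 5, 'hat': 2}
```

The point of the sketch is that such a code is perfectly lossless between the two parties that share it, while looking like meaningless repetition to anyone outside the convention, which is exactly why the transcripts read as gibberish.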

Apparently, the shorthand chatting between the two bot agents was stopped by the researchers, but for some, the whole scenario is a bit frightening.  Some seem to have interpreted the shorthand as an attempt by the bots to hide or privatize their communication, which seems a bit paranoid.  But there is no question among researchers that if we allow AIs to fully develop their own language, it would likely be near impossible for humans to translate it.  Mark Wilson takes a very interesting deep dive into the ethics and dilemmas of this kind of research.