AI, Augmented Reality, and How to Avoid a Craptastic Future

A short exercise on predicting or recommending where AI will take marketing in the next two years.

There are three areas where I see AI either already established or being rapidly adopted:

Audio Content Recognition

Connected heavily with augmented reality, and more generally with mobile, we’re already using audio fingerprinting, a branch of weak AI, to permit fuzzy identification of audio content. This has been around for a long time in the form of apps like Shazam and SoundHound, which connect music heard over the radio or TV with identification, lyrics, and the ability to purchase. But I’m seeing ever-growing applications that migrate from this relatively passive approach to one where recognition improves the conversion rate of all sorts of traditional advertising: detecting brand messaging in the audience’s environment and giving them a simple shortcut to answering the call to action.
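To make the fingerprinting idea concrete, here is a minimal, toy sketch of the landmark-hashing approach popularized by Shazam. Everything here is illustrative: a real system extracts (time, frequency) peaks from a spectrogram of actual audio, while this sketch takes the peaks as given, and the function names are my own.

```python
# Toy sketch of landmark-style audio fingerprinting (Shazam-like).
# Real systems derive the (time, frequency) peaks from a spectrogram;
# here they are supplied directly as hypothetical input.

from collections import defaultdict

def landmarks(peaks, fan_out=3):
    """Pair each peak with the next few peaks to form hashes.
    A hash is (f1, f2, dt): two frequencies and the time gap."""
    peaks = sorted(peaks)
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1 : i + 1 + fan_out]:
            yield (f1, f2, t2 - t1), t1

def build_index(tracks):
    """Map each landmark hash to (track_id, offset) pairs."""
    index = defaultdict(list)
    for track_id, peaks in tracks.items():
        for h, t in landmarks(peaks):
            index[h].append((track_id, t))
    return index

def identify(index, sample_peaks):
    """Vote for the track whose matching hashes line up at a
    consistent time offset; return the winner, or None."""
    votes = defaultdict(int)
    for h, t in landmarks(sample_peaks):
        for track_id, t_db in index.get(h, []):
            votes[(track_id, t_db - t)] += 1
    if not votes:
        return None
    (track_id, _), _ = max(votes.items(), key=lambda kv: kv[1])
    return track_id
```

Because the hashes encode only frequency pairs and time gaps, a short sample recorded mid-song still matches: the offsets all agree, and the vote count for the true track dominates.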

Do something with all that Big Data

Thanks to market research, social media, public databases, and so on, we have more information about audiences than ever. The tendency is to think that the always-on future of screens is an opportunity to fill the public’s visual field with ever-present marketing messages. AI lets us avoid that misstep by using these massive datasets to anticipate who will respond and when, building models of interest and response, and being helpful rather than intrusive.
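As a sketch of what “building models of interest and response” can mean in practice, here is a toy logistic-regression scorer written from scratch for illustration. The features and training data are hypothetical; a production system would use a proper library and far richer signals, but the principle is the same: score each person’s likelihood of responding, and only reach out when that score is high.

```python
# Minimal sketch: fit a logistic model on past campaign outcomes so we
# contact only the users most likely to respond. Features and data are
# hypothetical; a real pipeline would use a library such as scikit-learn.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit(X, y, lr=0.5, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def response_probability(w, b, x):
    """Predicted probability that this user responds to the message."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Hypothetical features: [recent engagement score, hour-of-day match]
X = [[0.9, 1.0], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]  # did each user respond to past campaigns?
w, b = fit(X, y)
```

The “helpful rather than intrusive” part is a product decision built on top of the score: suppress the message entirely for users below a threshold instead of showing it to everyone.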

Fuzzy Brand Recognition and Product Association

2018 will see a spike in apps for mobile, and the year following in digital eyewear, that can confidently identify products in the user’s environment. Experimental and novelty AR has previously been based on recognition of specific targets, but Apple’s Core ML sent the message that AR recognition is all about machine learning. With this tool, we can connect marketing efforts to lifestyle like never before, becoming a consumer’s assistant instead of just another ad to close.
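One common way such recognition works is by comparing an embedding produced by an on-device vision model against a catalog of known products. The sketch below is illustrative only: the three-dimensional embeddings, catalog entries, and threshold are all hypothetical stand-ins for what a real model and product database would provide.

```python
# Sketch: match an image embedding from an on-device vision model
# against a product catalog by cosine similarity. Embeddings, catalog,
# and threshold are hypothetical placeholders.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical catalog: product name -> reference embedding
CATALOG = {
    "sneaker": [0.9, 0.1, 0.2],
    "soda_can": [0.1, 0.8, 0.3],
    "headphones": [0.2, 0.3, 0.9],
}

def identify_product(embedding, threshold=0.8):
    """Return the best catalog match, or None if nothing is close enough."""
    best = max(CATALOG, key=lambda name: cosine(embedding, CATALOG[name]))
    return best if cosine(embedding, CATALOG[best]) >= threshold else None
```

The threshold is what separates an assistant from an annoyance: below it, the app stays quiet rather than guessing at a product the user isn’t actually looking at.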

Elon Musk, Artificial Intelligence, and Whether We Should or Should Not Fear the Reaper

Elon Musk’s tweets on AI.

We’ve all seen them.

I guess I could start by saying that in general, I’m a big fan of Elon’s perspective and ingenuity. That feeling is echoed pretty much universally by everyone I work with and around – colleagues from my particular segment of Hollywood have gone to work at Tesla and SpaceX because our skillsets are typically applicable there as well (especially in the realm of simulations, and, to a lesser degree, AI).

I can also lead off by saying that Artificial Intelligence is incredibly risky – though not, I believe, in a “they’ll join together and create SkyNet and kill us all!” sort of way.

I’ll provide a few stages of information here – and answer any questions in the comments. This is just a quick overview – and it’s easily worth an entire book, not just a short article. I’ll cover the different types of AI (because this is important to understanding the risk), the risks that I believe Elon is most concerned with, and what the perspective of other people in the marketing, branding and entertainment space seems to be.


First, we need to understand what AI is – and what Musk is referring to. There are two (or maybe three) categories of Artificial Intelligence:

  1. Weak AI – We already have this, we use it extensively, and unless you live in a cave, your existence is already being almost constantly affected by the activity of Weak AI. Weak AI has gotten much, much better just in the last couple of years thanks to advances in storage, data transmission rates, the availability of massive datasets, processor speed, and the development of deep neural networks, made possible primarily by new algorithms that run on modern, high-end graphics cards.

    Weak AI does not perform independent thought – it can make startling realizations about huge, multidimensional datasets that humans may be incapable of finding, or perform typically human tasks faster than we’re able to, but there’s no direct path from “very good Weak AI” to sentience and self-direction.

  2. Strong AI – Think of Strong AI as a chatbot that’s no longer just pretending to understand you. Strong AI is sentient – it’s conscious in much the same way that we are. This isn’t something we have yet – but Strong AI isn’t out of reach. The concept of Strong AI doesn’t necessarily require self-direction or independence of thought; you could certainly imagine a Strong AI with self-awareness but analytical abilities only within a fairly narrow range. An Air Traffic Controller AI, for instance, that’s able to make moral judgments in split-second emergencies but isn’t likely to start working on a novel in its spare time.

    That’s not necessarily that different from what many of us humans are like. We’re not all equally inclined to be neurosurgeons and auto mechanics and TV producers. We’re conscious and we’re mostly good at one or two things.

  3. Artificial General Intelligence or AGI – This is often grouped with Strong AI, but I’ve chosen to describe it separately. AGI is that last step from “this is like us” to “this is better than us” – not just faster or with access to more information, but with absolute, inexhaustible capability to apply human or superhuman intellect to any problem, rather than a specific task set.


Now that we’ve covered the background, I can address the specific dangers I believe Elon is cautioning us about.

  1. Massive employment market upheaval. Weak AI, in its current form, is already coming for millions of jobs. Some image recognition algorithms and implementations are faster and more accurate than human vision. Self-driving cars are already on par with human drivers, and because of the network effect of sharing driving experience between all of those vehicles, every car in the fleet improves at once. AI-driven robots like the Boston Dynamics Atlas are steadily homing in on the types of warehouse jobs that aren’t already being automated: the actual picking up, or placing, of boxes on shelves. For every funny video of Atlas knocking over shelves or falling down stairs, there are countless hours spent by hundreds of engineers and still more training of its neural nets to ensure it does better next time.

    Jobs which do not require a significant degree of legitimate autonomy are all at risk here. If you can be written up for being five minutes late, that job is gone in 5–10 years’ time. If your job involves taking an instruction and carrying it out without autonomous decision-making or personal, face-to-face interaction with emotionally-inclined clients, there is enormous risk of it being phased out.

    New technologies always create new jobs – but the counterargument in this case is that we’re not just replacing labor, we’re replacing otherwise uniquely human behaviors. I don’t believe that the job displacement due to Weak AI is going to create long-term unemployment by itself. What it is going to do, though, is create relatively sudden and massive unemployment across a series of industries, initially among people for whom “just go back to school and learn a new trade” isn’t going to be an option, financially.

    It’s perhaps obvious to say that the people who are most confident that there’s no massive unemployment risk from AI are the ones who are not immediately subject to being replaced.

  2. Strong AI is likely to exacerbate the above – Strong AI is where we will likely see the follow-on wave of positions replaced: positions that require more nuance, that may require ethical or moral judgments or a sense of consciousness. In the medical field, Weak AI will take over some diagnostician roles, examining medical test results and MRI scans and such, while Strong AI will begin to take on more advanced tasks requiring fuzzier judgment calls that we’d only trust to a nurse or physician today.

    Sales positions – where we expect a marketer to think on their feet – will fall to Strong AI. And where teams of workers have already been displaced by Weak AI, their managers will no longer be needed either.

    This is the phase where our ability to recover from #1 is threatened – #1 puts the workforce on the ropes, struggling to retrain and adapt, and #2 comes along right behind – likely well before we’ve recovered from #1.

    Even so, I believe this phase is *still* recoverable. We’re going to see huge shifts in society, and the nature of our entire economy may change, but as the marginal cost of nearly everything approaches zero, even the most skeptical of Keynesian economists would probably anticipate it balancing out eventually.

    The biggest risk here is the decade or two it will take to sort that out. This is a very big concern. This isn’t “remember how bad unemployment got back in 2009” bad – this is a whole new level, perhaps not seen since the birth of Capitalism.

  3. Artificial General Intelligence. This is the one that I think haunts Elon and keeps him up at night, staring at the ceiling in the dark.

    It’s not that AGI is going to release poison gas and kill us all – it’s that it is the greatest threat to what we need more than jobs: purpose.

    While an AGI might be developed that matches the best human minds at tackling a wide range of problems at will, the more sinister side is that it will, in short order and following normal product upgrade cycles, become monumentally *better* than we are at doing fundamentally human activities.

    AGI poses an existential risk to humanity – not so much to our lives but to the reason that we live them. Some people may be happy living under the care of machines of loving grace, but many of us want to accomplish things. We want to build things. We want to create, get better, excel…  With the development of Artificial General Intelligence, those activities are unnecessary. Where Weak AI replaced the fry cook and truck driver, and Strong AI replaced the Shift Manager and Air Traffic Controller, AGI replaces the Entrepreneur, the research scientist, and the playwright.


For the most part, marketing & branding, and even the associated tech workers in these industries, are oblivious to AI except as a science-fiction threat. Our partners and clients range from mobile tech to oil companies – and they’re all, universally, deeply involved in AI. We’re touching it all over the place – Weak AI for the most part – to gain insight into user/customer behavior, to analyze why some advertising gets higher conversion rates than others, or to pick the right song to play next on a playlist.

At Mirada, we have a content discovery and distribution platform built around a host of technologies including machine vision and automatic audio content recognition. We’re applying machine learning to a visual-effects-intensive project right now – but instead of displacing workers, it’s allowing us to do work that wouldn’t otherwise have been possible. But this is all Weak AI.

Our clients are also almost limitlessly demanding – there’s no point at which we could deliver most projects where the client wouldn’t want a little bit more if their budget allowed it. Even the introduction of Strong AI seems unlikely to displace many (any?) people in our organization – it would simply enable us to put that much more effort into the project. Our consumer-facing products would predominantly use AI to assist the customer in ways it wasn’t previously practical to employ people for.


We’re in for a rough decade ahead – barring some other, greater societal upheaval, we’re facing some devastating unemployment starting at the tiers of society that are most vulnerable. At the same time, however, we’re not helpless against it – an economy where the cost of basic goods and services is largely reduced to the cost of electricity is one that can support a large number of less-engaged workers. People may find value in the numerous other activities we engage in – participating in society, local government, volunteer work, the arts – it’s quite possible that the arts community may simply never be won over by AI artists. As a society, we may decide to place even more emphasis on sporting activities and fitness – taking refuge in our innate biology.

What I’m doing

Much of the public-facing work we’re doing at Mirada, and nearly everything that I’m involved in collaboratively via my thinktank “Big Blue Ceiling”, responds to my own particular definition of “augmented reality”: it’s not about augmenting reality, it’s about human augmentation – leveraging exotic sensors, artificial intelligence, and big data to make humans better at being human.

What Musk is doing

Musk himself launched Neuralink this year – to augment humanity by linking us directly to computers. If successful, this project and others like it will transform life on Earth and free us from being supplanted by the machines. If we can’t beat them, we’ll join them.


Elon also talks about wanting regulation – wanting a body to look at the problem, and the solutions it’s offering, impassively, and treat it the same way one would any other tool that could be dangerous in the wrong hands. Not dangerous in a “the robots are coming to take my guns!” kind of way, but rather in the ways that make us wonder if life is worth living.

Regulations might be good, but I doubt they’ll be enough. They can’t stop the march of technology, and they can’t be applied globally; they’ll only slow it down. With luck, that delay will allow us to merge with the machines – and by joining with them, it will no longer be a cold AI that marches past us in intelligence, wisdom, memory, and processing. It will be ourselves, marching past anything that humanity has ever been.