How The Social Dilemma demonized AI-based Models, Addictive Design and Growth Hacking
The new Netflix documentary “The Social Dilemma” portrays how popular services like Facebook, Instagram, or Google manipulate users without them even realizing it.
Besides actors playing out a few scenes that show how easy it is to get sucked into a personalized Facebook feed, former employees of the tech giants give their take on how this whole machine works from their perspective.
As a tech person with a marketing and UX background, I decided to share my view on a few aspects discussed in the film.
Let’s begin with a bit of background.
It is important to separate addictive design from AI-based models.
If you have ever founded a startup, or are planning to do so, you probably ran into a few books that are recommended as a must read.
One of them is “Hooked: How to Build Habit-Forming Products” by Nir Eyal (2014). The book is a great introduction to the concepts of gamification and addictive design. It answers questions like: what makes us engage with a product, and what makes us want to come back?
Let’s be honest: when you decide to build a product, no matter whether it is a tech innovation or a product for everyday use, you want people to use it.
It seems logical that while you build something, you are looking for mechanisms that will make users more engaged, so that they will want to come back.
If, in addition to delivering value, you also make it engaging — what we’ll call addictive design — you have a high chance to become successful.
It is not a novel concept — casinos, computer games and even television formats are built based on similar ideas.
Most of the interviewees in the documentary admit that they were behind creating addictive mechanisms, but say that the purpose of the Like Button, for example, was to spread love and positive emotions, not to get people hooked on a social app.
If we dig a little deeper, we quickly find that although the Like Button was developed in 2007, Facebook launched it only in 2009, after two years of testing. So it was probably no secret that it increased user engagement, and addictive design followed.
Tech as magic
When we were building Stormly, we liked to refer to what it does as magic. Using that term, however, implied we had something uncontrollable and unpredictable. In reality, all we did was bring together technologies based on code and models.
I think the same applies to Artificial Intelligence. When people think of AI, the first thing that comes to mind is human-like creatures that will eventually want to destroy humanity. “Westworld” covered that vision pretty well.
However, we have been using AI for quite a while now. Simple chatbots, or Spotify suggesting new songs you will most likely enjoy, are also built using AI. The personification of the model in the Netflix documentary asks itself: “Did you ever think if these recommendations are bad for him?” (the human) and answers with a blunt “no”. It’s like asking a line of code whether it can feel love or sadness.
On the other hand, portraying tech as illusionary magic that tricks people into certain behavior is an obvious way of warning people against big tech companies. But at the same time, it demonizes the entire industry. Tech can be used for good, but that is up to the people who create it and the goals they set for those AI models.
AI+ML-based Models that guarantee that users will do something
“This is what every business has always dreamt of: to have a guarantee that if it places an ad, it will be successful.”
Let’s move to a few statements regarding, presumably, digital advertising. Professor Zuboff presents the digital ad industry, as opposed to traditional media, as a tool that provides guaranteed results. That would indeed be close to magic. In reality, while you can target ads to the right audience with the right message, there are no guarantees. In fact, even with precisely targeted campaigns, the average conversion rate falls somewhere between 2% and 15%, a wide range indeed.
Recommender models as evil creatures
Models can, for example, recommend content to users in real time, based on their interests, demographics, and the content they interacted with in the past. They can also predict your actions based on past behavior.
They indeed learn and improve with more data, and that is the whole cool thing about AI and ML.
But I think we are missing the fact that the data is delivered by the users themselves. If you are very active on social media, post images, react to every event and so on, that data is fed to the algorithm, and based on it, the algorithm gives you suggestions.
Again, we have to remember that the models don’t set goals for themselves.
If the goal is higher engagement, the model may end up recommending conspiracy theories, because they engage a certain group of people. Or the content creators and ad creatives figure out that depressing content actually leads to more conversions.
In the end, it’s the content being created that’s evil, not the model itself.
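To make the point above concrete, here is a toy sketch of a content recommender whose scoring function has an engagement goal baked in. This is not how any real platform’s models work; all topic names, posts and engagement numbers are invented for illustration.

```python
from collections import Counter

def recommend(user_history, candidates, top_n=2):
    # Count how often the user interacted with each topic in the past
    interest = Counter(t for post in user_history for t in post["topics"])

    def score(post):
        relevance = sum(interest[t] for t in post["topics"])
        # The "goal" lives here: a post's past engagement multiplies its score,
        # so highly engaging content wins even over mild personal relevance.
        return relevance * post["engagement"]

    return sorted(candidates, key=score, reverse=True)[:top_n]

# Invented example data
history = [
    {"topics": ["fitness", "travel"]},
    {"topics": ["travel"]},
]
candidates = [
    {"id": "a", "topics": ["travel"], "engagement": 1.0},
    {"id": "b", "topics": ["politics"], "engagement": 5.0},
    {"id": "c", "topics": ["fitness", "travel"], "engagement": 2.0},
]

print([p["id"] for p in recommend(history, candidates)])  # → ['c', 'a']
```

Note that the model itself is just arithmetic over whatever data and objective it is given; change the objective (say, weight by reported usefulness instead of raw engagement) and the same code recommends very different content.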
“If you are not paying for the product, then you are the product”
This, I think, is one of the most important things to keep in mind when moving around the tech space. For some reason, I myself used to think that apps should be free, until I understood that every product has a different monetization model, and most of the time it is best to pay for a service and be sure that I am the client, not the product.
“They have more information about us than has ever been imagined in human history”
In a time when everyone owns a smartphone, a smartwatch and other devices, data about our activities is being recorded almost non-stop. The amount of data recorded keeps increasing, and this pattern is difficult to stop. Nowadays communication has largely moved to apps, and people hardly ever use a phone just for calling. What matters is to distance ourselves from the device and dig into how it works and how it influences us.
“Teams of engineers whose job is to hack people’s psychology (so that they can get more growth).”
I think most growth hackers will feel very flattered hearing that description. There is indeed a relatively new field called Growth Hacking, which originated in helping startups grow fast, in a short period of time, on low budgets. It is a mix of strategy, the right targeting and experimentation that is now also being picked up by large companies. This is more or less the idea of marketing, not just on the internet, but in general. I guess that is also what Mel Gibson did in “What Women Want”, and what we have been doing for generations.
“After all this testing, we figured out: get any individual to seven friends in ten days”
It is also called the “aha-moment”, and it is one of the most important metrics you should discover about your product. As soon as you find your aha-moment, you can design your product around it and increase your chances for engagement and growth. At Stormly, we have always been excited about these kinds of metrics, and we developed a way for anyone to find theirs with a single click.
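An aha-moment metric like “seven friends in ten days” is easy to compute once you have event data per user. Below is a minimal sketch of that check; the function name, thresholds and dates are all invented for illustration, not taken from any real product’s analytics.

```python
from datetime import datetime, timedelta

def hit_aha_moment(signup, friend_events, threshold=7, window_days=10):
    """Did the user add at least `threshold` friends within
    `window_days` of signing up?"""
    deadline = signup + timedelta(days=window_days)
    in_window = [t for t in friend_events if signup <= t <= deadline]
    return len(in_window) >= threshold

# Invented example: a user who adds 8 friends in the first 8 days
signup = datetime(2020, 9, 1)
friends = [signup + timedelta(days=d) for d in [1, 1, 2, 3, 4, 5, 6, 8]]

print(hit_aha_moment(signup, friends))  # → True
```

Running this check over a whole signup cohort and comparing long-term retention of users who did versus did not hit the moment is how such a threshold is typically validated in the first place.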
Anyone with an interest in how new technologies evolve should watch this documentary. We live in a time where “everyone has their own truth” and it has become impossible to have a constructive conversation about polarizing topics like politics, climate change or human rights. The film also portrays, more or less, how AI+ML-based models work, and how powerful and accurate they can be. However, it is important to remember that, like every powerful tool, they can do great things for humans, but also cause problems in the wrong hands.
As long as the richest and most powerful companies do not change their goal of constant growth, and do not know when to stop, powerful technologies will keep overwhelming kids and adults alike. But it’s the people in power who make those decisions, not the technology itself.