What are we talking about when we talk about AI Governance? First of all, we should take a step back and start from Artificial Intelligence. AI is a vast branch of computer science that deals with the construction of intelligent machines: systems based on algorithms able to perform tasks that usually require human intelligence.
Its origins are not as recent as you might think: shortly after the Second World War, Alan Turing, the mathematician who helped break the Enigma code, published the paper “Computing Machinery and Intelligence”, which opens with the question “Can machines think?”, laying the foundations for the ethical and ontological dilemma we still debate today.
AI has become part of our lives in a subtle way, for at least twenty years now. It does not look like a cyborg – it turned out quite different from the collective imagination shaped by the science fiction films of the 1980s – but it did break into our daily life through many of the services we use today.
Therefore, it is not Blade Runner with its Replicants, nor the Terminator’s Skynet, but it could still turn into a disturbing episode of Black Mirror if it is not used in the right way.
In fact, we tend to perceive it as harmless, as well as useful, but there is a point we still have to ponder: we have gradually delegated to AI a lot of our everyday choices.
Just think of how much we rely on online map services such as Google Maps. When we choose where to go to dinner, we delegate to a machine, which gives us a list of possible results based on geolocation and other criteria: a range of possibilities ranked in a specific order “decided” by the machine itself, and this ranking deeply influences our choice.
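The ranking logic behind such suggestions can be illustrated with a minimal sketch. The restaurants, weights and scoring formula below are invented for the example and have nothing to do with Google’s actual algorithm; the point is simply that whoever sets the weights “decides” the order we see:

```python
# Toy ranking: score restaurants by a weighted mix of proximity and
# rating. Data and weights are invented for illustration only.

restaurants = [
    {"name": "Trattoria Roma", "distance_km": 0.5, "rating": 4.2},
    {"name": "Sushi Bar",      "distance_km": 2.0, "rating": 4.8},
    {"name": "Pizza Express",  "distance_km": 1.0, "rating": 3.9},
]

def score(r, w_distance=0.6, w_rating=0.4):
    # Closer is better, so invert distance; normalize rating to 0-1.
    proximity = 1 / (1 + r["distance_km"])
    return w_distance * proximity + w_rating * (r["rating"] / 5)

ranked = sorted(restaurants, key=score, reverse=True)
for r in ranked:
    print(r["name"], round(score(r), 3))
```

Notice that shifting the weights – say, favouring rating over proximity – reorders the entire list: the “best” restaurant is whatever the scoring function says it is.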
This happens with many digital services that we use every day: from social networks, which populate our feed on the basis of what we “like”, but end up orienting our purchases – and not only that, they can even influence our political ideologies and beliefs – to on-demand video services, which show us TV series and films that we might like, in fact shaping our taste, while we kid ourselves thinking we instructed the algorithm with our previous choices.
The substrate is therefore always data: the traces we leave on the web – likes, visited places, reviews – allow AI-based systems to anticipate us and suggest choices we might make in the future.
Delegating our choices to AI: implications and risks
What’s wrong with suspending our decisions and relying on algorithms, if that simplifies our lives and provides us with more personalized user experiences?
Apparently nothing, but we should carefully consider some aspects of the issue. First of all, what does the convenience of delegating take away from us, in terms of capability? For example, the use of navigation apps has made us less able to find our way independently to a destination or within a city.
So what happens when we delegate even more important decisions? What skills are we sacrificing in the name of convenience while using a machine?
We should be well aware that we cannot afford to delegate our ability to choose – in other words our free will – to an object.
Nevertheless, Artificial Intelligence certainly represents enormous progress, a positive step forward that is bringing companies considerable process efficiency across every sector.
In fact, AI’s main advantage is saving a resource that we are increasingly lacking today: time.
Its processing speed makes it possible to accelerate and automate labour-intensive, time-consuming activities that require little interpretation, leaving humans free for strategic thinking – the so-called human quid.
A notable example of this is the one cited by Paolo Benanti in his TED Talk “Topoi or digital myths”: the application of AI in the medical field – a very delicate sector – in the UK.
Doctors are supported by the language analysis and data interrogation capabilities of Artificial Intelligence in making the first general diagnoses, but they obviously never lose their human and personal responsibility in verifying the data empirically and in taking the final decision about how the patient should be treated.
Another positive application of AI is its support in industrial processes to optimize and make sustainable the production and logistics, with resulting improvements in terms of consumption and environmental impact.
AI is also used today to enhance Marketing performance: MarTech is based precisely on the management and processing of large amounts of data, which are enriched, segmented and activated using intelligent algorithms.
MarTech is on the rise and is facing new challenges in building processes more smoothly than before, combining data, human skills and technology.
This does not necessarily imply greater complexity, on the contrary: the wide availability of AI and Machine Learning solutions allows companies to connect different tools with a so-called low-code or even no-code approach, i.e. with little or no programming, without disrupting their existing business systems.
According to a recent study by Gartner, marketing technologies and business intelligence tools are becoming increasingly strategic for companies of all sizes and industries, with adoption growing by 50% and 41% respectively.
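The kind of segmentation MarTech relies on can be sketched in a few lines. Below is a deliberately tiny, one-dimensional k-means clustering that splits customers into two spend segments; the figures and the two-segment setup are invented for illustration, and real platforms work on far richer data:

```python
# Toy 1-D k-means: segment customers by monthly spend into k groups.
# The spend figures are invented for illustration.

spend = [12, 15, 14, 90, 95, 11, 88, 13, 92, 16]

def kmeans_1d(values, k=2, iters=20):
    # Deterministic init: spread the centers between min and max.
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        # Assign each value to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Recompute each center as the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans_1d(spend)
print(centers)   # one "low spend" and one "high spend" center
```

Once segmented, each group can be “activated” differently – for instance with distinct campaigns – which is exactly the enrich-segment-activate loop described above.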
Self-awareness and interpretation: the ethical doubt of AI
In a historical moment in which we will be increasingly supported by tools that facilitate our choices and increase our human capacity and intelligence, freeing us from repetitive actions, the greatest challenge is to keep a balance between the positive opportunity to enhance our human intelligence and the negative risk of a loss of awareness and self-determination.
Therefore, there is an ethical theme related to AI Governance, underlying the way we use Artificial Intelligence and how it integrates with our humanity.
Recently, news leaked from Google about LaMDA, an Artificial Intelligence that, according to engineer Blake Lemoine, was sentient – aware of itself and of its own feelings.
The claim was soon denied by Mountain View – which also fired the engineer shortly afterwards – and criticized by many AI experts worldwide.
The dialogue between Lemoine and the machine does not show self-awareness, but simply a surprising ability to produce human-like sentences, after LaMDA was trained on billions of texts and conversations.
In fact, AI “learning” is much closer to correlation analysis than to our concept of learning, so the premise of the self-awareness question is often flawed.
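A minimal sketch can make this statistical kind of “learning” concrete. The toy bigram model below – far simpler than any real language model, and invented purely for illustration – “learns” only by counting which word follows which, then predicts the most frequent continuation:

```python
from collections import Counter, defaultdict

# Toy bigram model: "learning" here is just counting co-occurrences
# in the training text, then predicting the most frequent follower.
# No understanding is involved at any point.

corpus = ("the machine answers the question and "
          "the machine repeats the answer").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Most frequently observed continuation of `word`.
    return follows[word].most_common(1)[0][0]

print(predict("the"))
```

The model produces plausible continuations from pure frequency statistics, which is why fluent output alone says nothing about awareness.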
However, the news remains extremely interesting because it raises a doubt: the fact that an Artificial Intelligence, programmed by humans, can make those same humans wonder whether it is autonomous and endowed with conscience is in itself a serious matter.
AI Governance for training and the work of the future
The need is emerging to train tomorrow’s professionals in these new issues: to teach them how to use tools with incredible potential, whose complexity is not only technological but also ethical.
The professional skills needed are different from those required a decade ago: humans need to learn how to be effective in a context where automation has more and more space.
Companies are looking for people able to connect the dots, to combine technical skills and strategic, interpretative skills.
In this sense, the traditional dichotomy between scientific and humanistic disciplines comes to an end: the professionals of today and tomorrow will be neither pure engineers devoted only to numbers nor literati lost in volumes of poetry, but an effective synthesis of these two aspects, embodying de facto Digital Humanism.
“Bionic” is a term that is making quite a comeback, but as mentioned at the beginning, it no longer refers to the world of cyborgs. On the contrary, etymologically speaking, it indicates a combination of human and electronic life, an integration of biology and technology.
This means that organizations willing to adopt AI on a large scale should consider the human aspect in the use of technology as well, in a true advent of the Digital Renaissance.
To make the most of the benefits of Artificial Intelligence, corporates and SMEs must focus on people’s talent and on cultivating human skills that synthesize science and humanism.
Human-machine interactions are relatively new and still evolving, and businesses cannot rely only on technical skills.
Furthermore, for the entire ecosystem to function optimally, companies must establish clear AI Governance and Data Governance, an operational and ethical model, and working methods suited to this new organizational structure, which mirrors a new way of approaching reality as a whole.