
Generative AI and corporate life

by Technologia

Generative AI has made a dramatic entry into the world of work, where myths and fantasies abound and the impact on the organization of work remains uncertain. For employees, the challenge is to anticipate their potential replacement and to evolve their skills accordingly; for organizations, the question is not whether they will use it, or when, but how AI can help them maximize their revenues (without compromising their data).

This is the starting point for a recent survey on the possibilities of generative AI, conducted by Insight (1) to capture decision-makers' views on the matter.

What is generative AI?

Generative AI is a sub-category of AI that generates content (text, images, music) that looks as if it were created by humans. It differs from conversational AI in its ability to produce unique, probabilistic responses based on its training data and the instructions provided.
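
To make "probabilistic" concrete, here is a minimal sketch in Python using the open source Hugging Face transformers library (the model name is purely illustrative): because the model samples from a probability distribution, the same prompt can produce a different response on every run.

    # Minimal sketch: probabilistic generation with an open source model.
    # The model name is illustrative; any small causal language model works.
    from transformers import pipeline

    generator = pipeline("text-generation", model="distilgpt2")
    prompt = "Generative AI in the workplace will"

    # With do_sample=True, each token is drawn from a probability
    # distribution, so every run can yield a different continuation.
    for run in range(3):
        output = generator(prompt, max_new_tokens=30, do_sample=True, temperature=0.9)
        print(f"Run {run + 1}:", output[0]["generated_text"])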

The arrival of AI is reminiscent of the arrival of office automation, then the Internet, then cloud computing, all of which were characterized by significant organizational change, with no turning back.

Why use generative AI in business?

Executives surveyed are looking to adopt generative AI first and foremost to improve productivity and customer service, even as they wonder, in the same breath, how to integrate this technology without creating employee over-dependence on these new tools. Let's be honest: this last concern isn't at the top of the list, especially as AI will undoubtedly lead to a rationalization of the workforce (in other words, redundancies).

What uses for generative AI in business?

The most obvious uses involve improving employee productivity (e.g., content creation), customer engagement (via increasingly effective and relevant chatbots), R&D, automated software development and decision-making support, notably through the ability to produce or analyze data, and much more besides.

Every medal has its reverse side, however, and the transition from theory to practice is not so simple. Several criteria need to be taken into account:

  • Compliance: organizations need to comply with the law, in particular Quebec's Law 25 on the protection of personal information. Users are very aware of this obligation (and reassured by it).
  • Security: directly linked to the previous point, security is essential for any organization, especially with the growing number of cyberattacks, some of which are themselves AI-based! But this is not the only issue: the major AI models (Gemini, ChatGPT, Copilot and others) are proprietary solutions. To take advantage of their power, organizations must share their data with them... which is problematic from a legal and intellectual-property point of view. This is why many organizations have asked their employees to stop using ChatGPT (2). It also explains the rise of alternative solutions based on open source AI (such as Mistral in France), which can be deployed locally and configured to the organization's own needs (see the sketch after this list).
  • Data quality: this influences the relevance of the results delivered. The effectiveness of AI depends on the ability to feed it with data: lots of it, and reliable. Hence the interest in the open source AI mentioned above, an approach that makes it possible to develop secure, tailor-made solutions that are less resource-hungry, more efficient and fed with the organization's own data.
  • Biases: these are regularly observed, as the programming of algorithms is unconsciously influenced by their authors. Another bias to be wary of is blind trust in results provided by a machine. As an example (3), several men have been unjustly accused of crimes because a police facial-recognition system (supported by AI) identified them as suspects, despite obvious contradictory elements (such as a morphology that in no way matched that of the culprit filmed by the surveillance cameras, or the simple impossibility of being at the scene).
  • Resources consumed: it is not often mentioned, but AI consumes enormous amounts of water, particularly in the manufacture of chips, as well as energy to run its calculations and respond to the multitude of requests it receives (4). Google itself reports that its AI consumes energy equivalent to that of 500,000 inhabitants. It's a safe bet that this aspect will soon be part of the public debate.
  • Query quality: the art of the "prompt", i.e. how to phrase a request correctly, requires learning and, for the time being, a lot of trial and error before it produces a truly relevant result. The sketch after this list shows how a more specific prompt constrains the output.
  • A brake on innovation: this may seem counter-intuitive, given that we expect AI to multiply possibilities by analyzing data. Yet two factors can slow down an organization's capacity to innovate. The first is simply that AI currently only consumes available data to build its "recommendations"; in the short term it will end up recycling data it has itself produced, delivering analyses of no use whatsoever. The second is that humans will come to "rely" on AI for all their questions, when it was precisely their own reflection and critical thinking that enabled them to come up with original ideas. AI can encourage a certain intellectual laziness, contributing to regression rather than innovation.
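
To ground two of the points above (deploying an open source model locally, and the art of the prompt), here is a hedged sketch, again in Python with the Hugging Face transformers library, of querying a locally hosted model first with a vague prompt, then with a specific one. The Mistral model name and both prompts are illustrative assumptions, not recommendations.

    # Hedged sketch: prompt iteration against a locally hosted open source
    # model. Running locally means the organization's data never leaves
    # its own infrastructure. Model name and prompts are illustrative.
    # (An instruct model would normally be queried through its chat
    # template; that step is omitted here for brevity.)
    from transformers import pipeline

    generate = pipeline("text-generation",
                        model="mistralai/Mistral-7B-Instruct-v0.2")

    # A vague prompt tends to produce a generic, unfocused answer...
    vague = "Write about our training course."

    # ...while a specific prompt fixes audience, length, tone and format.
    specific = ("Write a 100-word description of an ITSM governance course "
                "for IT managers, in a formal tone, ending with a call to action.")

    for prompt in (vague, specific):
        result = generate(prompt, max_new_tokens=150, do_sample=True)
        print(result[0]["generated_text"], "\n---")

Comparing the two outputs side by side is the quickest way to learn what a given model needs from a prompt.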

It's up to companies to take corrective measures in the face of these eventualities; and this list is not exhaustive.

Internal policy and governance

Considering the nature and variety of risks associated with generative AI, organizations need to clarify their internal policy and data governance.

Among the points cited by Insight as requiring guidance at organizational level are:

  • The roles and responsibilities of those in charge of implementing generative AI;
  • Clear, communicated and enforced governance;
  • A statement of best practice for training learning models;
  • A clear security and confidentiality policy (who has access to what, at what level, what data must be encrypted, what the response plan is in the event of an incident, etc.);
  • Well-established rules regarding the notion of intellectual property (and plagiarism) and its respect;
  • Follow-up of industry recommendations on compliance and legal obligations;
  • Employee training in various protocols;
  • Monitoring and analysis processes.

AI, a win-win tool?

The tremendous possibilities offered by AI should enable everyone to benefit: organizations gain in efficiency, productivity and relevance... and so do employees! Provided, of course, that AI is implemented and deployed step by step.

Companies need to start small, with a proof of concept: something measurable, before considering a global deployment. This means clarifying expectations upstream, beyond budget lines, in order to assess real progress. This step-by-step approach also gives both staff and tools time to adapt.

Once this phase is complete, all that remains is to identify the projects offering the greatest potential for added value and quick wins, in order to facilitate buy-in. Time may be of the essence, but don't confuse speed with haste.
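
As a purely hypothetical illustration of "something measurable", the sketch below compares a pilot against its pre-AI baseline for two invented metrics; every name and figure is an assumption chosen for illustration.

    # Hypothetical sketch: making a generative-AI proof of concept
    # measurable. Metrics, baselines and targets are invented; both
    # metrics here are "lower is better".
    from dataclasses import dataclass

    @dataclass
    class PocMetric:
        name: str
        baseline: float          # measured before the pilot
        pilot: float             # measured during the pilot
        target_gain_pct: float   # improvement agreed upstream

        def gain_pct(self) -> float:
            return (self.baseline - self.pilot) / self.baseline * 100

        def target_met(self) -> bool:
            return self.gain_pct() >= self.target_gain_pct

    metrics = [
        PocMetric("Drafting time (min)", baseline=18.0, pilot=11.0, target_gain_pct=25.0),
        PocMetric("Rework rate (%)", baseline=12.0, pilot=10.5, target_gain_pct=20.0),
    ]

    for m in metrics:
        verdict = "met" if m.target_met() else "not met"
        print(f"{m.name}: {m.gain_pct():.0f}% gain "
              f"(target {m.target_gain_pct:.0f}%) -> {verdict}")

Agreeing on such targets upstream is precisely what makes it possible to assess real progress once the pilot ends.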

To find out more:

ITSM: Practice Management and Governance

(1) BEYOND HYPOTHETICALS: Understanding the Real Possibilities of Generative AI

(2) It should be noted that there is an "enterprise" version of ChatGPT, which is supposed to enable greater data confidentiality, among other things. However, it remains a proprietary solution.

(3) How Wrongful Arrests Based on AI Derailed 3 Men's Lives - Wired

(4) ChatGPT & co: why the energy cost of AI is a real problem (article in French)
