
How to make efficient use of generative AI

Generative AI is here to stay.

It is not just a hype, even though the hype will probably get worse before it gets better. And we are clearly still in one, as the following chart from Google Trends, showing the search interest for ChatGPT between October 1, 2022 and April 12, 2023, demonstrates.

Similarly, the Gartner Group sees Generative AI technology approaching the peak of inflated expectations in its 2022 hype cycle for artificial intelligence.

To be sure, we see only the tip of the iceberg when looking at the voice-, text- or image-based services that we all know and use. The Gartner Group also foresees many industrial use cases, ranging from drug and chip design to the design of parts and overall solutions.

You think that these scenarios lie far in the future? Read this Nature article from 2021 and think again.

And in contrast to some of the other hypes that we have seen in the past few years, there are actual use cases that support the technology’s survival of the trough of disillusionment. Because these viable use cases exist, generative AI is not a solution in search of a problem, unlike the “Metaverse”, blockchain, or NFTs.

Apart from OpenAI’s GPT and DALL-E models, which surely caught everybody’s attention in the past weeks and months, there are a good number of large language models that are just less well known. A brief survey that I recently conducted unearthed more than 50 models published over the past few years. For their paper A Survey of Large Language Models, which focuses on “review[ing] the recent advances of LLMs by introducing the background, key findings, and mainstream techniques”, a group of AI researchers identified a similarly impressive number of large language models developed in the past few years.

Fig. 1: A timeline of large language models; source: A Survey of Large Language Models

This increased competition, along with research into how the resource consumption of large models can be reduced, e.g. through sparsification, will lead to improved pricing.
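
To make the idea of sparsification a bit more tangible, here is a minimal sketch of magnitude pruning using PyTorch's built-in pruning utilities. The toy model and the 50 percent sparsity target are purely illustrative assumptions.

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # A toy two-layer network; the layer sizes are arbitrary assumptions.
    model = nn.Sequential(nn.Linear(768, 3072), nn.ReLU(), nn.Linear(3072, 768))

    # Zero out the 50 percent of weights with the smallest magnitude per layer.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.5)
            prune.remove(module, "weight")  # bake the zeros into the weight tensor

    # Verify the achieved sparsity across all parameters.
    total = sum(p.numel() for p in model.parameters())
    zeros = sum((p == 0).sum().item() for p in model.parameters())
    print(f"sparsity: {zeros / total:.1%}")

Note that zeroed-out weights only save memory and compute if the runtime exploits them, which is exactly what the research mentioned above is working on.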

Consequently, businesses need to find a way to leverage this technology without compromising their own data security and/or exposing their intellectual property (IP).

That this can easily go wrong has already been learned the hard way by companies as diverse as Amazon and Samsung, to name just two better-known cases. And then there are ongoing legal cases around the unauthorized use of IP-protected data for training, and many open questions around the ownership of the generated (derivative?) work products of generative AI, not to speak of unwanted bias.

The way forward involves three major steps:

Education

It is necessary to educate teams not only on the benefits of using generative AI but also on how to check its output and, importantly, on when not to use generative AI. Every employee has a role in keeping the business protected from avoidable mistakes. And no employee wants to make one of those mistakes. Sources that are used for any generated content should be attributed and/or verifiably not copyrighted. Create some sensitivity along the lines of “when in doubt, don’t use it”. Enable your people. For the time being, this training should be reviewed frequently, as the whole area is evolving fast.

Second, it is important to enable personnel to fine-tune and then train large language models. This is not a trivial task; it requires some knowledge of machine learning.
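
To give a flavour of what such a fine-tuning task looks like in practice, here is a minimal sketch using the Hugging Face transformers library. The base model (gpt2), the training file company_docs.txt, and all hyperparameters are placeholder assumptions, not recommendations.

    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    # Assumption: any causal LM checkpoint would do; gpt2 is just small and public.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Assumption: a small in-house corpus, one plain-text example per line.
    dataset = load_dataset("text", data_files={"train": "company_docs.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="finetuned-model",
                               per_device_train_batch_size=4,
                               num_train_epochs=1),
        train_dataset=tokenized,
        # mlm=False means plain next-token (causal) language modelling.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

Even this simple loop shows why some machine learning knowledge is needed: choices like data preparation, sequence length and batch size all influence the result.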

Governance

As with most technologies, there is a need for governance to augment the training. Generative AI is a very powerful tool, so it needs to be used responsibly and ethically. To help employees with their own decision making about when and how it is permissible to use generative AI services in the course of their work, and in line with the corporate values, it is necessary to develop, publish and implement a policy that outlines and explains the relevant guidelines.

Points that should be covered by the policy include (a sketch of how they might be encoded follows the list):

  • Who is allowed to use generative AI services. Depending on numerous factors, like corporate culture or the sensitivity of the information dealt with, it may be useful to explicitly limit the use of generative AI.

  • Proprietary or confidential information must not be revealed or disclosed.

  • Similar to other policies (e.g. on the use of the Internet in general), the policy may limit usage to business purposes or explicitly permit private use as well.

  • The services must not be used for tasks that violate any law, regulation, or the company’s code of conduct. This includes discrimination and offensive or hate speech, as well as content that is intended to deceive or defraud others.

  • Whatever the generative AI produces needs to be reviewed. 

  • The use of generative AI to create a work product should be disclosed in order to maintain transparency.
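
To illustrate how such a policy could also be made checkable by internal tooling, here is a minimal sketch that encodes the points above as data. All field names and values are hypothetical.

    # Hypothetical, machine-readable summary of the policy points above.
    GENAI_POLICY = {
        "permitted_roles": ["marketing", "engineering", "support"],
        "business_use_only": True,
        "forbidden_inputs": ["customer data", "source code", "trade secrets"],
        "forbidden_outputs": ["discriminatory or hateful content",
                              "deceptive or fraudulent content"],
        "human_review_required": True,
        "disclosure_required": True,
    }

    def may_use_genai(role: str) -> bool:
        # Check whether a given role is allowed to use generative AI services.
        return role in GENAI_POLICY["permitted_roles"]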

The policy should also include a reference to the offered training that personnel should undertake. Last but not least, and as bad as it sounds, the consequences of violating the policy must be made clear as well. Here, too, I would concentrate on explanation and rationale instead of just wielding a stick.

It might be useful to review this policy frequently, as we are still in a learning period.

Following my own suggestion here: I have used You.com and Google Bard to give me some points that the policy should cover.

Execution

In parallel to this effort, it is time to look for use cases that can meaningfully be implemented. Collect and assess them. Meaningful means that the use cases are important enough to add value and isolated enough to keep any risks manageable. These use cases can include improving existing capabilities or adding new ones. What is important is that every use case comes with KPIs that shall be improved and with a cost-benefit analysis.
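
A trivial sketch of such a cost-benefit check, with entirely made-up numbers and field names, could look like this:

    from dataclasses import dataclass

    @dataclass
    class UseCase:
        name: str
        annual_benefit: float  # e.g. hours saved times a fully loaded hourly rate
        annual_cost: float     # e.g. licenses, fine-tuning, hosting, review effort

        def roi(self) -> float:
            return (self.annual_benefit - self.annual_cost) / self.annual_cost

    candidates = [
        UseCase("draft support replies", annual_benefit=120_000, annual_cost=40_000),
        UseCase("summarize call notes", annual_benefit=30_000, annual_cost=25_000),
    ]

    # Rank the collected use cases; only a positive ROI makes a candidate viable.
    for uc in sorted(candidates, key=lambda u: u.roi(), reverse=True):
        print(f"{uc.name}: ROI {uc.roi():.0%}")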

Remember: not every problem needs to be solved with the help of an LLM! For some problems it is more efficient to use more “traditional” technologies.

At the same time, it is important to be aware of the level of LLM knowledge that is available in house. This level is, especially in smaller organizations, not too high. The degree of available knowledge is a limiting factor for project execution. So, it is best to start off with a problem that requires only minor fine-tuning of an existing LLM that is powerful enough to support several of the collected use cases. This way, it is possible to learn fast while also getting results fast. Additionally, fine-tuning a model comes at far lower cost than fully training one, as the number of required training cycles is smaller and the amount of necessary training data is far lower. Out of the many available models (see above), one or more will be capable of supporting most of the selected use cases after fine-tuning.
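
One common way to keep the cost down even further is parameter-efficient fine-tuning, for example with LoRA adapters. The following minimal sketch uses the Hugging Face peft library; the base model and the LoRA settings are illustrative assumptions.

    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM

    # Assumption: gpt2 stands in for whichever base model is actually chosen.
    base = AutoModelForCausalLM.from_pretrained("gpt2")

    # Attach small low-rank adapters to the attention projections instead of
    # updating all weights; "c_attn" is the matching target module for gpt2.
    lora_model = get_peft_model(
        base,
        LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"],
                   task_type="CAUSAL_LM"),
    )

    trainable = sum(p.numel() for p in lora_model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in lora_model.parameters())
    print(f"trainable: {trainable:,} of {total:,} ({trainable / total:.2%})")

For gpt2 and these settings, the trainable share is well under one percent of all parameters, which is where much of the cost advantage comes from.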

The way to success is to think big while acting small, and to walk before starting to run.


 
