
Beyond the hype - How to use ChatGPT to create value

Now that we are in the middle of – or hopefully closer to the end of – the general hype caused by OpenAI’s ChatGPT, it is time to re-emphasize what is possible and what is not, what should be done and what should not. It is time to look at business use cases that go beyond the hype and that can be tied to actual business outcomes and business value.

This is especially true in light of what was probably the most expensive demo ever: Google Bard gave a factually wrong answer in its release demo, and that single factual error wiped more than $100bn US off Google’s valuation.

I say this without any gloating. Still, the incident shows how high the stakes are when it comes to large language models (LLMs). It also shows that businesses need to take a good, hard look at which problems they can meaningfully solve with their help. This includes quick wins as well as strategic solutions.

From a business perspective, there are at least two dimensions to look at when assessing the usefulness of solutions that involve LLMs.

One dimension, of course, is the degree of language fluency the system is capable of. Conversational user interfaces, exposed by chatbots, voice bots, digital assistants, smart speakers, etc., have been around for a while now. These systems are able to interpret the written or spoken word and to respond accordingly, either in writing/speech or by initiating the action that was asked for. One of the main limitations of these more traditional conversational AI systems is that they are better at understanding than at – for lack of a better word – expressing themselves. Relying on well-trained machine learning models, they are also quite regularly able to surface a correct solution for problems within the domain they are trained for. They usually work based on pretrained intents.

And, based on their training data, they usually give quite accurate responses to questions in their domain.

The problem: They are usually limited to a fairly small number of domains.

LLMs, on the other hand, are generally trained “to understand the relationships between words, phrases and sentences in a language. The goal is to have the LLM generate outputs that are semantically meaningful and reflect the context of the input.” This is part of ChatGPT’s answer to the question of what the purpose of an LLM is. The training set of an LLM is usually a vast amount of “real world” knowledge that comes from publicly available sources – aka the Internet. The output itself can be in written, graphical, or other formats.

What LLMs excel at is generating responses to questions in a human way. And they can respond on a wide variety of topics. When it comes to text, they are built to generate coherent and meaningful responses.

The problem: They sometimes lack accuracy and give wrong output with full confidence. Even worse, wrong or inaccurate output is not easily identifiable by a user without the requisite knowledge. Again, refer to the Google Bard example that (temporarily, at least) wiped more than $100bn US off Google’s valuation. This is not to pick on Google; there are plenty of examples around that call out ChatGPT, You.com, or other tools.

Consequently, the other dimension to look at is accuracy.

The question is whether both dimensions always matter equally or not. In a business sense, one can argue that accuracy always matters. Receiving factual errors in a business conversation is not only a poor customer experience but may, in extreme cases, even lead to legal issues.

What is also important to understand is that the more accuracy is required, the greater the need to integrate additional systems that augment the LLM. An LLM on its own is not much more than some form of entertainment. Even in search engines, LLMs only augment the search by enabling natural language queries and by delivering results in human language instead of a mere list of links.

At least they should do this.
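
To make this augmentation point a bit more tangible, here is a minimal sketch in Python: the answer to a question is grounded in snippets retrieved from an enterprise knowledge base before the LLM is asked. The tiny in-memory knowledge base and the call_llm() stub are placeholders for illustration, not any specific vendor’s API.

```python
# Minimal sketch: grounding an LLM answer in retrieved enterprise content.
# The in-memory "knowledge base" and call_llm() are placeholders, not a real API.

KNOWLEDGE_BASE = [
    "Product X supports single sign-on via SAML 2.0.",
    "Refunds are processed within 14 days of receiving the returned item.",
    "Premium support is available 24/7 by phone and chat.",
]

def search_knowledge_base(query: str, top_k: int = 2) -> list[str]:
    """Naive keyword overlap; a real system would use enterprise search or a vector index."""
    words = set(query.lower().split())
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def call_llm(prompt: str) -> str:
    """Placeholder for the LLM endpoint of your choice."""
    raise NotImplementedError("wire this to an LLM provider")

def answer_with_context(question: str) -> str:
    # Retrieve first, then let the LLM formulate the answer from that context.
    context = "\n".join(search_knowledge_base(question))
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

The point here is the shape of the flow – retrieve first, then let the LLM answer from that context – not the specific implementation.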

With all this being said, what are business use cases involving a large language model? As said, they need to deliver reasonable accuracy. Obviously, they also require fluency, as fluency is the core differentiator of an LLM.

Let’s look at some use cases in no particular order of priority.

LLM business use cases that can already be implemented today


  • I’d start with something that I’d call “storytelling”. This is basically the creation of market-relevant documents that describe the capabilities and differentiating factors of a product, solution, or service. Being somewhat marketing-related (no offence intended) and a first point of contact for customers, this content needs to be easy to understand without requiring a great deal of technical accuracy. At the same time, it must not be wrong. A stripped-down version of this could be the (improved) generation of social media content, e.g., tweets. The benefits are the faster creation of high-level content for general websites but also, more specifically, for ABM scenarios and landing pages. To create this text, an LLM needs to be connected to internal systems holding requirements, specifications, and the communications between the people involved. This is also a use case that should be implementable in the near term.

  • One of the main tasks of many people is writing, and even more so responding to, emails. Especially in sales scenarios, responses to customer inquiries can be formulated and suggested based on previous emails and the context provided by the CRM system, e.g., about proposals made. This scenario already requires quite a high accuracy to avoid sending out faulty information that might be legally binding. The benefit of this scenario is a significant reduction in the time needed to respond to emails, resulting in increased productivity. It is a scenario that Microsoft has already implemented in its Viva Sales solution.

  • Generation of documentation is a scenario in which the requirement for fluency varies somewhat. It can be divided mainly into technical and user documentation. While user documentation needs to be extremely readable, writing style is somewhat less important for technical documentation. Conversely, technical documentation likely needs a degree of technical accuracy that is not needed in user documentation. This means that either different repositories or different parts of the source documents need to be used to create the texts and, potentially, diagrams and images.

  • One of the most promising use cases in the short term is customer service, including enterprise search. Here, users want answers to their questions, or something actioned, not just links. To achieve this, it is necessary to connect a conversational AI, business systems, and a well-functioning knowledge base that helps in generating accurate answers. The actioning of issues is very similar to what conversational AIs already do today. The differences are that intent detection can be far better, as the LLM can create more than enough training data for it (a minimal sketch of this idea follows after this list), and that the answers given by the system are far more fluent. The same holds true for inquiry scenarios. As a word of caution, however, the accuracy of responses depends heavily on the knowledge base content that the enterprise search draws on; the knowledge base therefore needs continuous and rigorous scrutiny. Properly implemented, the benefits are improved call deflection, as more cases can be handled by the system, combined with increased customer satisfaction, as issue handling becomes easy and efficient for the customer. Cognigy has recently presented some very good examples (here and here) that also include voice in- and output.

  • Agent assistance is somewhat easier to implement, as it mostly needs to connect to the customer service application, including the chat history. Having full access to sales and marketing data is, of course, helpful, too. Combined with sentiment analysis, the LLM can suggest text blocks for the agent to use (see the second sketch after this list). The benefits are increased agent efficiency and quite possibly also higher customer satisfaction, as the suggested text blocks can show more empathy with the customer’s situation than texts generated without an LLM.
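
To illustrate the point from the customer service bullet that an LLM can generate more than enough training data for intent detection, here is a minimal, hypothetical sketch in Python. The intent names and the call_llm() stub are assumptions made for the sake of the example, not a specific product’s API.

```python
# Minimal sketch: using an LLM to generate additional training utterances
# for the intent model of a conversational AI. call_llm() is a stub.

def call_llm(prompt: str) -> str:
    """Placeholder for the LLM endpoint of your choice."""
    raise NotImplementedError("wire this to an LLM provider")

def generate_training_utterances(intent: str, example: str, n: int = 20) -> list[str]:
    """Ask the LLM for n paraphrases of an intent; one utterance per line."""
    prompt = (
        f"Generate {n} different ways a customer might phrase the intent "
        f"'{intent}'. Write one utterance per line. Example: {example}"
    )
    return [line.strip() for line in call_llm(prompt).splitlines() if line.strip()]

# Hypothetical usage: the generated utterances would be reviewed by a human
# and then added to the intent model's training set.
# utterances = generate_training_utterances("track_order", "Where is my parcel?")
```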
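
The agent assistance scenario can be sketched in a similarly hypothetical way: a (deliberately naive) sentiment check plus the chat history and CRM notes are turned into a prompt that asks the LLM for a reply suggestion the agent can review. Again, classify_sentiment() and call_llm() are placeholders, not a real product integration.

```python
# Minimal sketch of agent assist: combine sentiment, chat history, and CRM
# context into a prompt that asks the LLM for a suggested reply.

NEGATIVE_WORDS = {"angry", "frustrated", "cancel", "unacceptable", "disappointed"}

def classify_sentiment(text: str) -> str:
    """Crude keyword heuristic; a real deployment would use a proper sentiment model."""
    return "negative" if NEGATIVE_WORDS & set(text.lower().split()) else "neutral"

def call_llm(prompt: str) -> str:
    """Placeholder for the LLM endpoint of your choice."""
    raise NotImplementedError("wire this to an LLM provider")

def suggest_reply(chat_history: list[str], crm_notes: str) -> str:
    sentiment = classify_sentiment(chat_history[-1])
    prompt = (
        f"You assist a customer service agent. Customer sentiment: {sentiment}.\n"
        f"CRM notes: {crm_notes}\n"
        "Conversation so far:\n" + "\n".join(chat_history) + "\n\n"
        "Draft an empathetic, factually cautious reply for the agent to review."
    )
    return call_llm(prompt)
```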

In summary, these five scenarios show use cases involving an LLM that go beyond the hype. They can be implemented in a short time, and they can easily be tied to business outcomes. That way, their benefits can be measured.

Which other use cases do you see? And how would you tie them to business value? 
