
Beyond the hype - How to use ChatGPT to create value

Now that we are in the middle of – or hopefully closer to the end of – the general hype caused by OpenAI’s ChatGPT, it is time to re-emphasize what is possible and what is not, and what should be done and what should not. It is time to look at business use cases that go beyond the hype and that can be tied to actual business outcomes and business value.

This is especially true in light of what was probably the most expensive demo ever: Google Bard gave a factually wrong answer in its release demo, and that single factual error wiped more than $100bn US off Google’s valuation.

I say this without any gloating. Still, this incident shows how high the stakes are when it comes to large language models (LLMs). It also shows that businesses need to take a good, hard look at which problems they can meaningfully solve with their help. This includes quick wins as well as strategic solutions.

From a business perspective, there are at least two dimensions to look at when assessing the usefulness of solutions that involve LLMs.

One dimension, of course, is the degree of language fluency the system is capable of. Conversational user interfaces, exposed by chatbots, voice bots, digital assistants, smart speakers, etc., have been around for a while now. These systems are able to interpret the written or spoken word and to respond accordingly. The response is either written/spoken or consists of initiating the action that was asked for. One of the main limitations of these more traditional conversational AI systems is that they are better at understanding than at – for lack of a better word – expressing themselves. Relying on well-trained machine learning models, they are also quite regularly able to surface a correct solution for problems in the domain they are trained for. They usually work based on pretrained intents.

And, based on the training data, they usually give quite accurate responses to questions in their domain.

The problem: They are usually limited to a fairly small number of domains.

LLMs, on the other hand, are generally trained “to understand the relationships between words, phrases and sentences in a language. The goal is to have the LLM generate outputs that are semantically meaningful and reflect the context of the input.” This is part of ChatGPT’s answer to the question of what the purpose of an LLM is. The training set of an LLM is usually a vast amount of “real world” knowledge that comes from publicly available sources – aka the Internet. The output itself can be in written, graphical, or other formats.

What LLMs excel at is generating responses to questions in a human way, and they can respond on a wide variety of topics. When focusing on text, they are built to generate coherent and meaningful responses.

The problem: They sometimes lack accuracy and give wrong output with full confidence. Even worse, wrong or inaccurate output is not easily identifiable by a user without the requisite knowledge. Again, refer to the Google Bard example that (temporarily, at least) wiped $100 billion US off Google’s valuation. This is not to pick on Google; there are plenty of examples around that call out ChatGPT or other tools.

Consequently, the other dimension to look at is accuracy.

The question is whether both dimensions always matter equally. In a business sense, one can argue that accuracy always matters. Receiving factual errors in a business conversation is not only a poor customer experience but may, in extreme cases, even lead to legal issues.

What is also important to understand is that the more accuracy is required, the greater the need to integrate additional systems that augment the LLM. An LLM on its own is not much more than some form of entertainment. Even in search engines, LLMs only augment the search by enabling natural language queries and by delivering results in human language instead of a mere list of links.

At least they should do this.
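To make this augmentation pattern concrete, here is a minimal, hypothetical sketch of grounding an LLM in an external knowledge source. The keyword-overlap "retriever", the toy knowledge base, and all names are illustrative stand-ins; a real system would use semantic search over a proper repository and then send the assembled prompt to an LLM API, which is out of scope here.

```python
# Minimal sketch: augment an LLM with an external knowledge source.
# The naive keyword retriever and the toy knowledge base are hypothetical
# placeholders for a real vector store and document repository.

def retrieve(query: str, knowledge_base: list[dict], top_k: int = 2) -> list[dict]:
    """Naive keyword-overlap retrieval, standing in for semantic search."""
    query_terms = set(query.lower().split())
    scored = [
        (len(query_terms & set(doc["text"].lower().split())), doc)
        for doc in knowledge_base
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_grounded_prompt(query: str, docs: list[dict]) -> str:
    """Assemble a prompt that instructs the LLM to answer only from sources."""
    sources = "\n".join(f"- [{d['id']}] {d['text']}" for d in docs)
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

kb = [
    {"id": "KB-1", "text": "Returns are accepted within 30 days of purchase."},
    {"id": "KB-2", "text": "Shipping is free for orders above 50 euros."},
]
prompt = build_grounded_prompt(
    "What is the returns policy?", retrieve("returns policy days", kb)
)
```

The key design point is in the prompt, not the retrieval: constraining the LLM to the supplied sources is what moves the system from entertainment toward the accuracy that business scenarios demand.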

With all this being said, what are business use cases involving a large language model? As said, they need reasonable accuracy. They obviously require fluency as a precondition, as fluency is the core differentiator of an LLM.

Let’s look at some use cases in no particular order of priority.

LLM business use cases that can be implemented now

  • I’d start with something that I’d call “storytelling”. This is basically the creation of market-relevant documents that describe the capabilities and differentiating factors of a product, solution, or service. Being somewhat marketing-related (no offence intended) and a first point of contact for customers, such content needs to be easy to understand without requiring a great deal of technical accuracy. At the same time, it must not be wrong. A stripped-down version of this could be the (improved) generation of social media content, e.g., tweets. Benefits are the faster creation of high-level content for general websites but also, more specifically, for ABM scenarios and landing pages. To be able to create this text, an LLM needs to be connected to internal systems holding requirements and specifications as well as communications between the people involved. This is also a use case that should be implementable in the near term.

  • One of the main tasks for many people is writing, and even more so responding to, emails. Especially in sales scenarios, responses to customer inquiries can be formulated and suggested based upon previous emails and the context given by the CRM system, e.g., about proposals made. This scenario already requires quite a high accuracy to avoid sending out faulty information that might be legally binding. The benefit of this scenario is a significant reduction in the time needed to send emails, resulting in increased productivity. It is a scenario that Microsoft has already implemented in its Viva Sales solution.

  • Generation of documentation is a scenario that varies somewhat in its requirement for fluency. It can be divided mainly into technical and user documentation. While user documentation needs to be extremely readable, writing style is somewhat less important for technical documentation. Conversely, technical documentation likely needs a high degree of technical accuracy that is not needed in user documentation. This means that either different repositories or different parts of the source documents need to be used to create the texts and, potentially, diagrams and images.

  • One of the most promising use cases in the short term is customer service, including enterprise search. Here, users want answers to their questions, not just links or something actioned. To achieve this, it is necessary to connect the LLM to a conversational AI, to business systems, and to a well-functioning knowledge base that helps in generating accurate answers. The actioning of issues is very similar to what conversational AIs already do now. The differences are that intent detection can be far better, as the LLM can create more than enough training data for it, and that the answers given by the system are far more fluent. The same holds true for an inquiry scenario. However, as a word of caution, the accuracy of responses to inquiries depends heavily on the knowledge base content that gets searched by the enterprise search. Therefore, the knowledge base needs continuous and rigorous scrutiny. Properly implemented, the benefits include improved call deflection, as more cases can get handled by the system, combined with increased customer satisfaction, as issue handling becomes easy and efficient for the customer. Cognigy has recently presented some very good examples (here and here) that also include voice in- and output.

  • Agent assistance is somewhat easier to implement, as it mostly needs to connect to the customer service application, including the chat history. Having access to sales and marketing data is, of course, helpful, too. Combined with sentiment analysis, the LLM can suggest text blocks for the agent to use. The benefits are increased agent efficiency and quite possibly also higher customer satisfaction, as the text blocks can exhibit more empathy with the customer’s situation than texts generated without an LLM.
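The email-response scenario above can be sketched as a prompt-assembly step that pulls context from the CRM before the LLM drafts anything. The field names (account, last_proposal) and the guardrail wording are illustrative assumptions, not Viva Sales internals; the actual LLM call is omitted.

```python
# Hypothetical sketch: build an email-reply prompt from CRM context.
# Field names and the guardrail phrasing are illustrative assumptions;
# real CRM schemas and the LLM client call are out of scope.

def build_email_prompt(inbound_email: str, crm_context: dict) -> str:
    context_lines = "\n".join(f"{key}: {value}" for key, value in crm_context.items())
    return (
        "Draft a polite reply to the customer email below. Use only the "
        "facts from the CRM context; do not invent prices or commitments.\n\n"
        f"CRM context:\n{context_lines}\n\n"
        f"Customer email:\n{inbound_email}\n"
    )

prompt = build_email_prompt(
    "Could you confirm the discount we discussed?",
    {"account": "Acme Corp", "last_proposal": "10% discount on annual plan"},
)
```

The explicit "do not invent" instruction reflects the accuracy requirement noted above: anything the draft states may end up being legally binding, so the prompt should forbid unsupported claims.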
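For the customer service case, the claim that "the LLM can create more than enough training data" for intent detection can be sketched as a simple prompt generator. The template wording and the example intent are assumptions for illustration; the generated utterances would still need human review before training a conversational AI on them.

```python
# Illustrative sketch: ask an LLM to generate training utterances for an
# intent classifier. The prompt template is an assumption; the actual LLM
# call and the review step for generated data are omitted.

def intent_training_prompt(intent: str, examples: list[str], n: int = 20) -> str:
    example_block = "\n".join(f"- {e}" for e in examples)
    return (
        f"Generate {n} varied user utterances for the intent '{intent}'.\n"
        f"Existing examples:\n{example_block}\n"
        "Return one utterance per line, covering formal and informal phrasing."
    )

prompt = intent_training_prompt(
    "cancel_subscription",
    ["I want to cancel my plan", "please stop my subscription"],
)
```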
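And for agent assistance, the interplay of sentiment analysis and text-block suggestion can be sketched as below. The word list and the canned openings are toy placeholders; a production system would use a trained sentiment model and LLM-generated drafts rather than hard-coded strings.

```python
# Toy sketch of sentiment-aware suggestion for agent assist. The word
# list and canned text blocks are hypothetical placeholders for a real
# sentiment model and LLM-drafted responses.

NEGATIVE_WORDS = {"angry", "frustrated", "broken", "unacceptable", "terrible"}

def simple_sentiment(message: str) -> str:
    """Crude lexicon lookup standing in for a real sentiment classifier."""
    words = set(message.lower().split())
    return "negative" if words & NEGATIVE_WORDS else "neutral"

def suggest_opening(message: str) -> str:
    """Pick an empathetic opener when the customer sounds upset."""
    if simple_sentiment(message) == "negative":
        return "I am sorry to hear about the trouble you are having..."
    return "Thanks for reaching out..."

suggestion = suggest_opening("My device arrived broken and I am frustrated")
```

The point of the sketch is the routing decision: detected sentiment selects the tone of the suggested text block, which is where the claimed empathy gain comes from.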

In summary, these five scenarios show use cases involving an LLM that go beyond the hype. They can be implemented in a short time and they can also easily be tied to business outcomes. That way, their benefits can be measured.

Which other use cases do you see? And how would you tie them to business value? 

