A First Look: Streamlining Procurement through AI-powered Demand Intake

Published: April 18, 2024

Fairmarkit is using an AI-first approach to revolutionize the procurement lifecycle. If you ask any CPO what their procurement department does, you will get an answer that more or less looks like the cycle below:

[Image: Procurement lifecycle with demand identification, RFx creation, supplier discovery, vendor and response evaluation, and negotiations.]

In this post, we focus on the first step of the cycle, which we call “Demand Identification.”

What is the main challenge procurement teams face in capturing the request from the business user?

Procurement professionals are often faced with vague demand requests from business users, who grow increasingly impatient. As a business user, I might need several meetings and back-and-forth emails with procurement just to explain the details of my request. Procurement then has its own back-and-forth with potential vendors. It’s a big game of telephone, and the business user experiences it as friction. After all, in our private lives, most of us are used to ordering our own goods and services.

So business users either sidestep procurement entirely when they can, or involve them very late in the game, when most decisions have already been made and the only step left is payment approval. This is bad for the business, since procurement is incentivized to run competitive sourcing and manage supplier risk, while the business user is incentivized to move their project forward fast.

Some procurement departments try to get around this by building complex request forms for business users. But these forms are static, mostly one-size-fits-all, and suffer from poor engagement. They amount to a ticketing system for processing requests; they don’t solve the core problem of inefficient back-and-forth.

How does AI help in this case?

[Image: Robot representing AI with a can of paint.]

AI can help in a few ways.

Imagine if the AI understood my fence repair service request. It asked me contextually relevant questions about fence dimensions, material, and desired paint color. It then suggested details like anticipated cost, and gave me insight into the typical turnaround time for such requests. Finally, it helped route my request to the proper procurement person, ensuring compliance with any existing procurement policies.

None of the steps outlined above are net-new, but they usually happen very inefficiently, which is the source of friction. The idea is to frontload as much of that work as possible by letting the business user interact with the AI. This level of customization is not achievable with a static intake form.

As a business user, my first (and possibly only) communication with procurement would then be very streamlined. It’s possible I might even be able to skip most of the initial back-and-forth steps, and get to direct communication with the fence repair vendor. Since the request to the vendor would already have the enriched information that the AI collected, it would also reduce inefficient back-and-forth and speed up the quoting process as well. 

We’ve heard from some clients that a request can take 240 person-hours on average from start to finish. AI can make a meaningful dent in that.

It seems like this could work for simple requests like fence repair, but what about more complex requests, like a larger construction project? 

AI can still help in that case. These strategic requests usually require an RFP, an even more involved and time-consuming document to build. We’ll cover that in upcoming posts in this series 🙂

Ok, going back to the example about the fence repair request, can you unpack how exactly AI will work here to understand this request? 

Imagine a three-layered cake, where each layer represents a crucial component in the process of demand intake. The bottom is the “Data”, the middle is the “LLM/AI”, and the top layer is the “Application”. Let’s walk through this cake using the fence repair request.

[Image: Cake with three layers: the application layer (Fairmarkit) on top, the LLM/AI layer (algorithms creating unique AI outputs) in the middle, and the data layer at the bottom.]

Application Layer

Let’s start with the “Application” layer. This is where the business user enters the request and interacts with the system. A user would type in a request in natural language, for example: “Looking for fence repair service for our parking lot located at 123 Main Street. We need approximately 20 feet of fence.”

The application layer orchestrates the flow of information between the business user and the LLM layer underneath it. That information could be a category prediction (e.g. maintenance and repair services) or extracted details (e.g. address = 123 Main Street). The application layer would also dynamically present relevant follow-up questions back to the user (e.g. what kind of fence? Chain-link? Wooden?), along with inferred information (e.g. the approximate budget for this request and location would be ~$5,000).
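To make that orchestration concrete, here is a minimal Python sketch of the round trip between the two layers. Everything in it is hypothetical: the `IntakeResult` shape and the `fake_llm_layer` stub are invented for illustration, not Fairmarkit’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class IntakeResult:
    """Structured output the LLM layer hands back to the application layer."""
    category: str       # predicted spend category
    extracted: dict     # details stated explicitly in the request
    inferred: dict      # values the model estimated rather than read
    follow_ups: list    # contextual questions to show the user next

def handle_request(raw_text: str, llm_layer) -> IntakeResult:
    """One round trip: free-text request in, structured intake out.

    `llm_layer` is any callable mapping text to an IntakeResult;
    in a real system it would wrap one or more LLM calls.
    """
    return llm_layer(raw_text)

def fake_llm_layer(raw_text: str) -> IntakeResult:
    """Stubbed LLM layer so this sketch runs offline."""
    return IntakeResult(
        category="Maintenance and repair services",
        extracted={"address": "123 Main Street", "fence_length_ft": 20},
        inferred={"approx_budget_usd": 5000},
        follow_ups=["What kind of fence? Chain-link? Wooden?"],
    )

print(handle_request("Looking for fence repair service ...", fake_llm_layer))
```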

The application layer shapes the user experience. At Fairmarkit we chose to offer a dynamic, yet guided experience. Many companies instead opt for a conversational chatbot experience. AI chatbots do not always provide the ideal UX for integrating the LLM/AI layer, and because they are so unbounded, they make it much harder to control inevitable hallucinations.

LLM/AI Layer

This is the layer responsible for information processing. Four things typically happen in this layer: (i) prediction, (ii) extraction, (iii) classification, and (iv) generation. In the case of the fence request above, all four happened: the LLM classified the request into the appropriate category, extracted the address, predicted a budget, and generated contextual follow-up questions for the user to answer.
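One common way to get all four outputs from a single call is to ask the model for structured JSON. The sketch below is an assumption-laden illustration: the prompt wording is ours, and the model response is hard-coded so the example runs without API credentials.

```python
import json

# One prompt can cover all four tasks by requesting structured JSON.
PROMPT_TEMPLATE = """You are a procurement intake assistant.
Given the request below, return JSON with exactly these keys:
  "category"   - classify the request into a spend category
  "extracted"  - details stated explicitly in the request
  "predicted"  - estimated values, such as a budget
  "follow_ups" - questions the requester should answer next

Request: {request}
"""

request = ("Looking for fence repair service for our parking lot located "
           "at 123 Main Street. We need approximately 20 feet of fence.")
prompt = PROMPT_TEMPLATE.format(request=request)

# A plausible model response for this request (hard-coded here so the
# sketch runs offline; a real call would send `prompt` to an LLM).
raw_response = """{
  "category": "Maintenance and repair services",
  "extracted": {"address": "123 Main Street", "fence_length_ft": 20},
  "predicted": {"budget_usd": 5000},
  "follow_ups": ["What kind of fence? Chain-link? Wooden?"]
}"""

result = json.loads(raw_response)
print(result["category"], result["follow_ups"])
```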

In terms of actual LLM choices, there is a plethora of open-source (e.g. Llama 2) and closed-source (e.g. GPT) options. It’s wise not to be locked into any one vendor. This is certainly the case at Fairmarkit, where we’re able to hot-swap one LLM for another as needed.
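One generic way to keep that flexibility is to code against a thin interface and hide each vendor behind an adapter. This is a common pattern, not a description of Fairmarkit’s internals; the class names are made up, and the adapters return canned strings so the sketch runs offline.

```python
from typing import Protocol

class LLMClient(Protocol):
    """The only surface the rest of the product depends on."""
    def complete(self, prompt: str) -> str: ...

class HostedLlamaClient:
    """Adapter for a self-hosted open-source model (stubbed)."""
    def complete(self, prompt: str) -> str:
        return "open-source answer to: " + prompt  # wrap the real model call here

class ClosedSourceClient:
    """Adapter for a closed-source API such as GPT (stubbed)."""
    def complete(self, prompt: str) -> str:
        return "closed-source answer to: " + prompt  # wrap the vendor SDK here

def classify(llm: LLMClient, request: str) -> str:
    # Business logic sees only the interface, so hot-swapping providers
    # is a configuration change, not a code change.
    return llm.complete(f"Classify this procurement request: {request}")

print(classify(HostedLlamaClient(), "fence repair, 20 feet"))
print(classify(ClosedSourceClient(), "fence repair, 20 feet"))
```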

Aside from the standard security concerns of open-source vs. closed-source, the other deciding factors are model latency and model accuracy. As a general rule of thumb, larger models are slower, and how much latency you can tolerate depends on how fast your product needs to react. As for accuracy, Hugging Face maintains an LLM leaderboard, which changes weekly if not daily. Note that this measures accuracy against standardized benchmark datasets and does not include closed-source models.

Data Layer

If data was the new oil of the last decade, that’s even more true today in the world of AI/LLMs. LLMs come pre-trained on large amounts of data from the internet, but this general knowledge might not be sufficient or specific enough for the application layer.

For example, the Massachusetts Bay Transportation Authority (MBTA) makes its procurement policies open to the public. LLMs like ChatGPT have general knowledge about the MBTA, but cannot reference the Employee Code of Conduct buried in section 1.2.1 with any specificity. Suppose also that the fence request needed to be classified against MBTA’s own spend taxonomy; again, an off-the-shelf LLM cannot do that. Finally, suppose 30 other historical fence requests occurred at the MBTA last year, with details about what information was important to collect. That privately owned data would certainly not be accessible to the LLM out of the box.

So how can these various data sources (procurement policies, the MBTA taxonomy, and historical fence requests) be made available to the LLM layer? The simplest approach is to pass the information to the LLM as part of the prompt. This is known as in-context learning; it amounts to guiding the LLM with focused information. For example, one could pass the entire MBTA taxonomy into the prompt for category classification. The approach is limited by the maximum prompt size: it certainly wouldn’t work for the procurement policy, which runs hundreds of pages, and stuffing the prompt would also slow things down considerably.
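As a toy illustration of in-context learning, here is how a buyer-specific taxonomy might be folded directly into a classification prompt. The taxonomy entries are invented; a real taxonomy would be far larger, which is exactly where the prompt-size limit bites.

```python
# A tiny, invented slice of a buyer-specific spend taxonomy.
TAXONOMY = [
    "Facilities > Grounds > Fencing",
    "Facilities > Buildings > HVAC",
    "IT > Hardware > Laptops",
]

request = ("Looking for fence repair service for our parking lot "
           "located at 123 Main Street.")

# In-context learning: the taxonomy travels inside the prompt itself,
# constraining the model to answer with a category we supplied.
prompt = (
    "Classify the request into exactly one category from this list:\n"
    + "\n".join(f"- {c}" for c in TAXONOMY)
    + f"\n\nRequest: {request}\nCategory:"
)
print(prompt)  # this string would be sent to any LLM completion endpoint
```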

Another approach is to retrieve only the relevant pieces of data and pass those to the LLM. This is called retrieval-augmented generation (RAG), a more surgical version of in-context learning. In our next post, we’ll go deeper into RAG and how it can be the foundation for “document intelligence.”
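Ahead of that post, here is a deliberately tiny sketch of the RAG pattern: score document chunks for relevance, keep the best few, and build the prompt from only those. Production systems retrieve with embeddings and a vector index; the word-overlap scorer and sample policy chunks below are stand-ins so the example stays self-contained.

```python
def score(query: str, chunk: str) -> float:
    """Toy relevance score via word overlap; real RAG uses embeddings."""
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / (len(q) or 1)

# Invented policy snippets standing in for a hundreds-of-pages document.
policy_chunks = [
    "Section 1.2.1 Employee Code of Conduct: employees may not ...",
    "Section 4.7: fence and perimeter repair work requires two quotes ...",
    "Section 9.3: travel reimbursement is capped at ...",
]

query = "fence repair service for our parking lot"

# Retrieve: keep only the most relevant chunks, not the whole policy.
top = sorted(policy_chunks, key=lambda c: score(query, c), reverse=True)[:2]

# Augment + generate: the prompt carries just the retrieved context.
prompt = ("Answer using only this context:\n"
          + "\n".join(top)
          + f"\n\nQuestion: Which policies apply to: {query}?")
print(prompt)  # send to the LLM layer for the final answer
```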

Makes sense. Does Fairmarkit have an AI offering for demand intake? 

We do! Our Demand Intake feature is powered by KIT, our generative AI tool. It is revolutionizing and streamlining the way our customers tackle their demand intake process. You can see how it works in depth here.

Conclusions

The way AI has started to revolutionize demand intake for procurement professionals is only a small piece of its ability to change the way we buy and sell. AI can transform the procurement process and bring autonomous sourcing to companies of all sizes. Here at Fairmarkit, that is exactly what we are working toward. At our core, we are developing key features that help our users act more efficiently, and we’re excited to share what we have built and what we plan to build as the space continues to change and evolve.
