Automating watsonx.ai foundation models with LangChain

Lukasz Cmielowski, PhD
3 min read · Sep 25, 2023

The task

Create a simple application that generates a random question about a provided topic and then answers it. To achieve this, we select two of the Large Language Models available on watsonx.ai: the first generates a question based on the provided topic, and the second answers that question.

Generate the question based on {topic} -> LLM1 -> {question}

Answer the question {question} -> LLM2 -> generated text (response)
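The two-step flow above can be sketched in plain Python before wiring in real models. The two stub functions below are hypothetical stand-ins for the LLM calls, used only to illustrate how the output of the first step feeds the second:

```python
def llm1_generate_question(topic: str) -> str:
    # Hypothetical stub: a real foundation model would generate
    # a random question about the given topic.
    return f"What is the most important trend in {topic}?"

def llm2_answer_question(question: str) -> str:
    # Hypothetical stub: a real foundation model would answer the question.
    return f"(model answer to: {question})"

def qa_pipeline(topic: str) -> str:
    question = llm1_generate_question(topic)  # step 1: topic -> question
    return llm2_answer_question(question)     # step 2: question -> answer

print(qa_pipeline("IT"))
```

The rest of the article replaces these stubs with watsonx.ai models and lets LangChain handle the sequencing.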

The watsonx.ai Python SDK

The Python SDK for watsonx.ai offers a programming interface to foundation models (generate_text). Using the Model class, you can easily switch between supported models and experiment with parameters and prompt strings for the generate_text method.

Set the text generation parameters using the Python API:

from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams
from ibm_watson_machine_learning.foundation_models.utils.enums import DecodingMethods

parameters = {
    GenParams.DECODING_METHOD: DecodingMethods.SAMPLE,
    GenParams.MAX_NEW_TOKENS: 100,
    GenParams.TEMPERATURE: 0.5,
    GenParams.RANDOM_SEED: 1,
}

Select the LLMs from the list of supported models:

['FLAN_T5_XXL', 'FLAN_UL2', 'MT0_XXL', 'GPT_NEOX', 'MPT_7B_INSTRUCT2',
 'STARCODER', 'LLAMA_2_70B_CHAT', 'GRANITE_13B_INSTRUCT', 'GRANITE_13B_CHAT']

from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes

model_id_1 = ModelTypes.FLAN_UL2
model_id_2 = ModelTypes.FLAN_T5_XXL

Initialize the model inference objects:

from ibm_watson_machine_learning.foundation_models import Model

flan_ul2_model = Model(
    model_id=model_id_1,
    params=parameters,
    credentials=credentials,
    project_id=project_id,
)

flan_t5_model = Model(
    model_id=model_id_2,
    credentials=credentials,
    project_id=project_id,
)

Prepare the test prompt string and generate the text:

flan_t5_model.generate_text('Which company produces Mustang cars?')
'ford motor company'

To enter the LangChain world, we use the custom LLM wrapper WatsonxLLM. The wrapper allows us to use chain features with watsonx.ai foundation models.

from ibm_watson_machine_learning.foundation_models.extensions.langchain import WatsonxLLM

langchain_t5 = WatsonxLLM(model=flan_t5_model)
langchain_ul2 = WatsonxLLM(model=flan_ul2_model)

Now we have entered the LangChain world!

Putting all the pieces together

We will use the chain capabilities offered by LangChain to automate the sequence of calls. The simplest type of sequential chain is the SimpleSequentialChain, in which each step has a single input and a single output, and the output of one step serves as the input of the next.
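Under the hood, that composition is just "feed each step's single output into the next step's single input". A minimal pure-Python sketch of the idea (stub callables, not the real chain class) looks like this:

```python
from typing import Callable, List

def run_sequential(chains: List[Callable[[str], str]], text: str) -> str:
    # Each step takes one string and returns one string; the output of
    # step i becomes the input of step i + 1 (SimpleSequentialChain semantics).
    for step in chains:
        text = step(text)
    return text

# Hypothetical stand-ins for the two LLMChain steps defined below.
step_1 = lambda topic: f"Question about {topic}?"
step_2 = lambda question: f"Answer to '{question}'"

print(run_sequential([step_1, step_2], "IT"))
```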

An object called PromptTemplate assists in generating prompts from a combination of user input, additional non-static data, and a fixed template string. In our case, we create two PromptTemplate objects: one responsible for creating a random question and one for answering it.

from langchain import PromptTemplate

prompt_1 = PromptTemplate(
    input_variables=["topic"],
    template="Generate a random question about {topic}: Question: ",
)
prompt_2 = PromptTemplate(
    input_variables=["question"],
    template="Answer the following question: {question}",
)

We would like to add functionality around language models using LLMChain. The prompt1_to_t5 chain formats the prompt template whose task is to generate a random question, passes the formatted string to the LLM, and returns the LLM output.

from langchain.chains import LLMChain

prompt1_to_t5 = LLMChain(llm=langchain_t5, prompt=prompt_1)

The prompt2_to_ul2 chain formats the prompt template whose task is to answer the question produced by the prompt1_to_t5 chain, passes the formatted string to the LLM, and returns the LLM output.

prompt2_to_ul2 = LLMChain(llm=langchain_ul2, prompt=prompt_2)

Here is the chain that runs prompt1_to_t5 and prompt2_to_ul2 in sequence.

from langchain.chains import SimpleSequentialChain

qa = SimpleSequentialChain(chains=[prompt1_to_t5, prompt2_to_ul2], verbose=True)

Generate a random question and an answer matching a specific topic:

qa.run('IT')
> Entering new SimpleSequentialChain chain...
What is the main reason for the development of the Internet?
to exchange information and ideas

> Finished chain.

That’s all folks!

One more thing — you can find sample notebooks here.


Lukasz Cmielowski, PhD

Senior Technical Staff Member at IBM, responsible for AutoAI (AutoML).