IBM is throwing its hat into the ultra-competitive generative AI ring
IBM, the global computing giant, is entering the fiercely competitive generative AI race, announcing new generative AI features and models for its previously announced watsonx data science platform.
Christened ‘Granite’, the generative AI models work much like popular LLMs such as GPT-4, using inputs to analyse and summarise text content as output. “And just as granite is a strong, multipurpose material with many uses in construction and manufacturing, so we at IBM believe these Granite models will deliver enduring value to your business”, said IBM in an official press release, without divulging further details about the models.
However, unlike other popular LLMs, which have found widespread use among general consumers, IBM is positioning its Granite models for enterprise and business customers, its strongest suit. Developed by IBM Research, the Granite models, Granite.13b.instruct and Granite.13b.chat, use a “Decoder” architecture, which, as in other large language models, lets them predict the next word in a sequence.
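For a sense of what decoder-style next-word prediction looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library. The Granite weights are not publicly available, so GPT-2 stands in purely as an illustrative model; the mechanism, scoring candidate next tokens and picking one, is what the architecture describes.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 is a stand-in here: Granite's weights are not public.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Granite models are designed for"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The final position's logits score every vocabulary entry as a candidate
# next token; greedy decoding simply picks the highest-scoring one.
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode([next_token_id]))
```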
“At IBM we are laser-focused on building models that are targeted for business. The Granite family of models is no different, and so we trained them on a variety of datasets — totaling 7 TB before pre-processing, 2.4 TB after pre-processing — to produce 1 trillion tokens, the collection of characters that has semantic meaning for a model”, the company added.
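Since IBM’s figures hinge on what a “token” is, a quick illustration helps. The Granite tokenizer has not been published, so the GPT-2 tokenizer below is only a stand-in, but it shows how raw text is split into the short character sequences that models actually count.

```python
from transformers import AutoTokenizer

# Stand-in tokenizer; Granite's own tokenizer is not public.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "IBM trained the Granite models on curated enterprise datasets."
token_ids = tokenizer.encode(text)

# Each id maps to a short character sequence with meaning for the model.
print(tokenizer.convert_ids_to_tokens(token_ids))
print(f"{len(token_ids)} tokens for {len(text)} characters")
```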
Since the Granite family of models is targeted at enterprise customers, IBM says it has placed additional focus on the security of such deployments. Every dataset used in training undergoes a defined governance, risk and compliance (GRC) review process.
The initial Granite models are just the beginning: more models in other languages are planned, and further IBM-trained models are in preparation. IBM also recently announced that it is offering Meta’s Llama 2-chat 70-billion-parameter model to select clients for early access, and it plans to make the model widely available later in September. In addition, IBM will host StarCoder, a large language model for code trained on more than 80 programming languages, Git commits, GitHub issues and Jupyter notebooks.
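StarCoder is openly available from the BigCode project, so a rough sketch of calling it through the transformers library looks like the following. Note the checkpoint is gated (its licence must be accepted) and weighs in around 15 billion parameters, so running it locally needs a large GPU; IBM’s hosted watsonx offering may expose the model differently.

```python
from transformers import pipeline

# bigcode/starcoder is a gated ~15B-parameter checkpoint on Hugging Face;
# accepting its licence and substantial GPU memory are prerequisites.
generator = pipeline("text-generation", model="bigcode/starcoder")

prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```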
In addition to the new models, IBM is also launching new complementary capabilities in the watsonx.ai studio. Coming later this month is the first iteration of the company’s Tuning Studio, which will include prompt tuning, an efficient, low-cost way for clients to adapt foundation models to their unique downstream tasks by training on their own trustworthy data. IBM will also launch a Synthetic Data Generator, which will assist users in creating artificial tabular data sets from custom data schemas or internal data sets.
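Prompt tuning itself is a published technique with open-source implementations, which gives a sense of why IBM describes it as efficient and low-cost. The sketch below uses the peft library with GPT-2 as a hypothetical stand-in base model, not IBM’s Tuning Studio API: the base model’s weights stay frozen, and only a small set of learned “virtual token” embeddings is trained.

```python
from peft import PromptTuningConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# GPT-2 stands in for a foundation model; watsonx's own API is not shown.
base_model = AutoModelForCausalLM.from_pretrained("gpt2")

# Prompt tuning freezes every base-model weight and learns only a small
# set of virtual-token embeddings prepended to each input, which is what
# makes it a cheap way to adapt a model to a downstream task.
config = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)
model = get_peft_model(base_model, config)

model.print_trainable_parameters()  # only a tiny fraction is trainable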