GPT-4: how to use the AI chatbot that puts ChatGPT to shame


People were in awe when ChatGPT came out, impressed by its natural language abilities as an AI chatbot. But when the highly anticipated GPT-4 large language model arrived, it blew the lid off what we thought was possible with AI, with some calling it an early glimpse of AGI (artificial general intelligence).

The creator of the model, OpenAI, calls it the company’s “most advanced system, producing safer and more useful responses.” Here’s everything you need to know about it, including how to use it and what it can do.

Availability

A laptop opened to the ChatGPT website. Shutterstock

GPT-4 was officially announced on March 13, as Microsoft had confirmed ahead of time, even though the exact date was unknown until then. As of now, however, it’s only available through the ChatGPT Plus paid subscription. The free version of ChatGPT is still based on GPT-3.5, which is less accurate and less capable by comparison.

GPT-4 has also been made available as an API “for developers to build applications and services.” Some of the companies that have already integrated GPT-4 include Duolingo, Be My Eyes, Stripe, and Khan Academy. The first public demonstration of GPT-4 was also livestreamed on YouTube, showing off some of its new capabilities.
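
If you’re one of those developers, calling GPT-4 works like any other chat completion request. Here’s a minimal sketch using the openai Python package as it shipped at launch (this assumes your account has been granted GPT-4 API access; the prompt is purely illustrative):

```python
# A minimal sketch of a GPT-4 API call using the openai Python package
# (v0.x, as it shipped at launch). Assumes your account has GPT-4 API
# access and the key is stored in the OPENAI_API_KEY environment variable.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code your key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what a large language model is in one sentence."},
    ],
)

print(response["choices"][0]["message"]["content"])
```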

What’s new in GPT-4?

GPT-4 is a new language model created by OpenAI that can generate text similar to human writing. It advances the technology used by ChatGPT, which is currently based on GPT-3.5. GPT stands for Generative Pre-trained Transformer, a deep learning architecture that uses artificial neural networks to write like a human.

According to OpenAI, this next-generation language model is more advanced than ChatGPT in three key areas: creativity, visual input, and longer context. In terms of creativity, OpenAI says GPT-4 is much better at both creating and collaborating with users on creative projects. Examples of these include music, screenplays, technical writing, and even “learning a user’s writing style.”

The longer context plays into this as well. GPT-4 can now process up to 25,000 words of text from the user. You can even just send GPT-4 a web link and ask it to interact with the text from that page. OpenAI says this can be helpful for the creation of long-form content, as well as “extended conversations.”
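
Under the hood, limits like this are measured in tokens rather than words, so you can check how much of that budget a document uses with OpenAI’s open-source tiktoken tokenizer. A minimal sketch, assuming tiktoken is installed and using a hypothetical long_article.txt as input (25,000 words corresponds to roughly 32,000 tokens, the ceiling of the largest GPT-4 context variant):

```python
# Estimate whether a document fits in GPT-4's expanded context window
# by counting tokens locally with OpenAI's tiktoken library.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

with open("long_article.txt") as f:  # hypothetical input file
    text = f.read()

n_tokens = len(enc.encode(text))
print(f"{len(text.split())} words -> {n_tokens} tokens")

# ~32,000 tokens is the ceiling of the largest GPT-4 context variant.
print("Fits in the 32k context window:", n_tokens <= 32_000)
```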

GPT-4 can also now receive images as a basis for interaction. In the example provided on the GPT-4 website, the chatbot is given an image of a few baking ingredients and is asked what can be made with them. It is not currently known if video can also be used in this same way.

Lastly, OpenAI says GPT-4 is significantly safer to use than the previous generation. In the company’s own internal testing, it was reportedly 40% more likely to produce factual responses and 82% less likely to “respond to requests for disallowed content.”

OpenAI says it’s been trained with human feedback to make these strides, claiming to have worked with “over 50 experts for early feedback in domains including AI safety and security.”

In the weeks since launch, early users have posted some of the amazing things they’ve done with GPT-4, including inventing new languages, detailing how it could escape into the real world, and making complex animations for apps from scratch. One user apparently got GPT-4 to create a working version of Pong in just sixty seconds, using a mix of HTML and JavaScript.

Where is visual input in GPT-4?

One of the most anticipated features of GPT-4 is visual input, which lets ChatGPT Plus interact with images, not just text. Being able to analyze images would be a huge boon to GPT-4, but the feature has been held back while safety challenges are mitigated, according to OpenAI CEO Sam Altman.

You can get a taste of what visual input can do in Bing Chat, which has recently opened up the feature to some users. It can also be tested through a separate application called MiniGPT-4. The open-source project was made by a group of PhD students, and while it’s a bit slow to process images, it demonstrates the kinds of tasks you’ll be able to do with visual input once it’s officially rolled out to GPT-4 in ChatGPT Plus.

What are the best GPT-4 plugins?

Wolfram plugin performing calculations in ChatGPT.

Plugins are one of the best reasons to pay for ChatGPT Plus, expanding what the model can do. Two of the most powerful examples were created by OpenAI itself: the Code Interpreter and web browser plugins.

By using these plugins in ChatGPT Plus, you can greatly expand the capabilities of GPT-4. ChatGPT Code Interpreter can run Python in a persistent session, and can even handle file uploads and downloads. The web browser plugin, on the other hand, gives GPT-4 access to the internet, allowing it to bypass the model’s training cutoff and fetch live information on your behalf.
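
To illustrate, this is the sort of self-contained snippet Code Interpreter might run in its sandboxed Python session: read an uploaded file, summarize it, and write a chart back out as a download (the filename and column names here are hypothetical):

```python
# The kind of code ChatGPT's Code Interpreter might run behind the scenes:
# load an uploaded CSV, summarize it, and save a chart for download.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sales.csv")  # a file the user uploaded to the chat
summary = df.groupby("region")["revenue"].sum().sort_values()
print(summary)

summary.plot(kind="barh", title="Revenue by region")
plt.tight_layout()
plt.savefig("revenue_by_region.png")  # offered back as a downloadable file
```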

Some of the other best GPT-4 plugins include Zapier, Wolfram, and Speak, all of which allow you to take the AI in new directions.

How to use GPT-4

Bing Chat shown on a laptop. Jacob Roach / Digital Trends

The main way to access GPT-4 right now is to upgrade to ChatGPT Plus. To jump up to the $20 paid subscription, just click on “Upgrade to Plus” in the sidebar in ChatGPT. Once you’ve entered your credit card information, you’ll be able to toggle between GPT-4 and older versions of the LLM. You can even double-check that you’re getting GPT-4 responses since they use a black logo instead of the green logo used for older models.

From there, using GPT-4 is identical to using ChatGPT Plus with GPT-3.5. It’s simply more capable than the older model, producing results you can tailor more closely to your needs.

If you don’t want to pay, there are some other ways to get a taste of how powerful GPT-4 is. First off, you can try it out as part of Microsoft’s Bing Chat. Microsoft revealed that it’s been using GPT-4 in Bing Chat, which is completely free to use. Some GPT-4 features are missing from Bing Chat, however, and it’s clearly been combined with some of Microsoft’s own proprietary technology. But you’ll still have access to that expanded LLM (large language model) and the advanced intelligence that comes with it. It should be noted that while Bing Chat is free, it is limited to 15 chats per session and 150 sessions per day.

There are lots of other applications currently using GPT-4, too, such as the question-and-answer site Quora.

What are GPT-4’s limitations?

While discussing the new capabilities of GPT-4, OpenAI also notes some of the limitations of the new language model. Like previous versions of GPT, OpenAI says the latest model still has problems with “social biases, hallucinations, and adversarial prompts.”

In other words, it’s not perfect. It’ll still get answers wrong, and there have been plenty of examples shown online that demonstrate its limitations. But OpenAI says these are all issues the company is working to address, and in general, GPT-4 is “less creative” with answers and therefore less likely to make up facts.

The other primary limitation is that the GPT-4 model was trained on internet data stopping in 2021. The web browser plugin helps get around this limitation, though.

Does Bing Chat use GPT-4?

Microsoft originally stated that the new Bing, or Bing Chat, was more powerful than ChatGPT. Since ChatGPT runs on GPT-3.5, the implication at the time was that Bing Chat could be using GPT-4. Microsoft has since confirmed that Bing Chat is, indeed, built on GPT-4.

Still, features such as visual input weren’t available on Bing Chat, so it’s not yet clear exactly which GPT-4 features have been integrated and which have not.

Regardless, Bing Chat clearly has been upgraded with the ability to access current information via the internet, a huge improvement over the current version of ChatGPT, which can only draw from the training it received through 2021.

In addition to internet access, the AI model used for Bing Chat is much faster, something that is extremely important when taken out of the lab and added to a search engine.

An evolution, not a revolution?

We haven’t tried out GPT-4 in ChatGPT Plus ourselves yet, but it’s bound to be more impressive, building on the success of ChatGPT. In fact, if you’ve tried the new Bing Chat, you’ve apparently already gotten a taste of it. Just don’t expect it to be something brand new.

Prior to the launch of GPT-4, OpenAI CEO Sam Altman said in a StrictlyVC interview posted by Connie Loizos on YouTube that “people are begging to be disappointed, and they will be.”

There, Altman acknowledged the potential of AGI to wreak havoc on world economies and argued that a quick rollout of several small changes is better than a shocking advancement that gives the world little opportunity to adapt. As impressive as GPT-4 seems, it’s more of a careful evolution than a full-blown revolution.
