TechsterHub
  • Home
  • About Us
  • News
  • Techsterhub Radar
    • AI Radar
    • B2B Insights
    • Cloud Radar
    • Marketing Radar
    • Tech Radar
    • Workforce Solutions
  • Resource
  • Contact Us

Meta™ Chatbot Model Modified and Run by AI Accelerator Groq™

by techsterhub bureau
March 16, 2023

This week, Groq, a pioneer in AI and ML systems, announced that it has adopted LLaMA, a new large language model (LLM) and chatbot technology from Meta proposed as an alternative to ChatGPT, to run on its platforms.

On February 24th, Facebook's parent company, Meta, launched LLaMA, which chatbots can use to produce human-sounding text. After downloading the model three days later, the Groq team had it running on eight GroqChip™ inference processors in a commercial GroqNode™ server within a few days. A bring-up of this kind frequently takes a much larger team of engineers weeks or months to finish; Groq did it with only a small group from its compiler team.

Jonathan Ross, CEO and founder of Groq, said, “This speed of development at Groq validates that our generalizable compiler and software-defined hardware approach is keeping up with the accelerating pace of LLM innovation, something traditional kernel-based approaches struggle with.”

While Meta researchers originally built LLaMA for NVIDIA™ processors, Groq’s rapid LLaMA bring-up is a significant milestone: Groq engineers ran a cutting-edge model on their own hardware, demonstrating GroqChip™ as a ready-to-use replacement for existing technologies. As generative AI carves out a niche in the market and transformers accelerate the pace of technology development, customers will need solutions that offer real time-to-production advantages and lower developer complexity for quick iteration.

Bill Xing, Tech Lead Manager, ML Compiler at Groq, said, “The complexity of computing platforms is permeating into user code and slowing down innovation. Groq is reversing this trend. Since we’re working on models that were trained on Nvidia GPUs, the first step of porting customer workloads to Groq is removing non-portable, vendor-specific code targeted at specific vendors and architectures. This might include replacing vendor-specific code calling kernels, removing manual parallelism or memory semantics, etc. The resulting code ends up looking a lot simpler and more elegant. Imagine not having to do all that ‘performance engineering’ in the first place to achieve stellar performance! This also helps by not locking a business down to a specific vendor.”
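The porting step Xing describes can be illustrated with a small, hypothetical sketch. This is not Groq’s actual API or LLaMA code; the function names and the “device” and “kernel” parameters are invented stand-ins. The point is the shape of the change: deleting device management and hand-picked kernel calls from user code, leaving plain math for a compiler to optimize.

```python
import math

# Hypothetical "vendor-specific" style: device placement and kernel
# selection leak into user code (simulated here in ordinary Python;
# real code would dispatch to hand-written GPU kernels).
def softmax_vendor_specific(xs, device="cuda:0", kernel="fused_v2"):
    if not device.startswith("cuda"):
        # Hand-tuned kernels are typically built for one vendor only.
        raise RuntimeError("kernel %r not available on %s" % (kernel, device))
    m = max(xs)  # stand-in for the body of the hand-written kernel
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Portable style: the same math with no device or kernel arguments,
# leaving parallelism, memory layout, and fusion to the compiler --
# the "simpler and more elegant" code Xing describes.
def softmax_portable(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]
```

Both functions compute the same numerically stable softmax; only the portable version is free of vendor assumptions, which is what makes it retargetable across architectures.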

 

 

 

 

 

    © 2026 TechsterHub. All Rights Reserved.
