| Forum Statistics |
» Members: 2
» Latest member: TechPR
» Forum threads: 26
» Forum posts: 26
|
| Latest Threads |
OpenAI signs deal, worth ...
Forum: 2026
Last Post: jasongeek
01-16-2026, 02:55 AM
» Replies: 0
» Views: 12
|
OpenAI partners with Cere...
Forum: 2026
Last Post: jasongeek
01-16-2026, 02:53 AM
» Replies: 0
» Views: 11
|
Say Bonjour to CroissantL...
Forum: 2024
Last Post: jasongeek
01-14-2026, 01:43 PM
» Replies: 0
» Views: 2
|
Twitter Co-Founder Launch...
Forum: 2025
Last Post: jasongeek
01-08-2026, 05:01 AM
» Replies: 0
» Views: 9
|
AMD CEO welcomes us to th...
Forum: 2026
Last Post: jasongeek
01-08-2026, 04:13 AM
» Replies: 0
» Views: 10
|
EIN Presswire
Forum: Business Directory
Last Post: jasongeek
01-04-2026, 08:42 PM
» Replies: 0
» Views: 16
|
China AI chipmaker Biren ...
Forum: 2026
Last Post: jasongeek
01-04-2026, 04:42 PM
» Replies: 0
» Views: 12
|
Elon Musk's Grok AI flood...
Forum: 2026
Last Post: jasongeek
01-04-2026, 04:37 PM
» Replies: 0
» Views: 11
|
OpusClip Celebrates $30M ...
Forum: 2024
Last Post: jasongeek
01-01-2026, 03:21 AM
» Replies: 0
» Views: 13
|
Anthropic's Claude Sonnet...
Forum: 2025
Last Post: jasongeek
12-30-2025, 03:12 PM
» Replies: 0
» Views: 21
|
|
|
| OpenAI signs deal, worth $10B, for compute from Cerebras |
|
Posted by: jasongeek - 01-16-2026, 02:55 AM - Forum: 2026
- No Replies
|
 |
OpenAI signs deal, worth $10B, for compute from Cerebras
Posted: 2:25 PM PST · January 14, 2026
By Lucas Ropek
OpenAI announced Wednesday that it had reached a multi-year agreement with AI chipmaker Cerebras. The chipmaker will deliver 750 megawatts of compute to the AI giant starting this year and continuing through the year 2028, Cerebras said.
The deal is worth over $10 billion, a source familiar with the details told TechCrunch. Reuters also reported the deal size.
Both companies said that the deal is about delivering faster outputs for OpenAI’s customers. In a blog post, OpenAI said these systems would speed responses that currently require more time to process. Andrew Feldman, co-founder and CEO of Cerebras, said just as “broadband transformed the internet, real-time inference will transform AI.”
Cerebras has been around for over a decade but its star has risen significantly since the launch of ChatGPT in 2022 and the AI boom that followed. The company claims its systems, built with its chips designed for AI use, are faster than GPU-based systems (such as Nvidia’s offerings).
Cerebras filed for an IPO in 2024 but since then has pushed it back a number of times. In the meantime, the company has continued to raise large amounts of money. On Tuesday, it was reported that the company was in talks to raise another billion dollars at a $22 billion valuation. It’s also worth noting that OpenAI’s CEO, Sam Altman, is already an investor in the company and that OpenAI once considered acquiring it.
“OpenAI’s compute strategy is to build a resilient portfolio that matches the right systems to the right workloads,” said Sachin Katti of OpenAI in the company’s post. “Cerebras adds a dedicated low-latency inference solution to our platform. That means faster responses, more natural interactions, and a stronger foundation to scale real-time AI to many more people.”
https://techcrunch.com/2026/01/14/openai...-cerebras/
|
|
|
| OpenAI partners with Cerebras |
|
Posted by: jasongeek - 01-16-2026, 02:53 AM - Forum: 2026
- No Replies
|
 |
OpenAI partners with Cerebras
OpenAI is partnering with Cerebras to add 750MW of ultra low-latency AI compute to our platform.
January 14, 2026 by OpenAI
Cerebras builds purpose-built AI systems to accelerate long outputs from AI models. Its unique speed comes from putting massive compute, memory, and bandwidth together on a single giant chip and eliminating the bottlenecks that slow inference on conventional hardware.
Integrating Cerebras into our mix of compute solutions is all about making our AI respond much faster. When you ask a hard question, generate code, create an image, or run an AI agent, there is a loop happening behind the scenes: you send a request, the model thinks, and it sends something back. When AI responds in real time, users do more with it, stay longer, and run higher-value workloads.
We will integrate this low-latency capacity into our inference stack in phases, expanding across workloads.
“OpenAI’s compute strategy is to build a resilient portfolio that matches the right systems to the right workloads. Cerebras adds a dedicated low-latency inference solution to our platform. That means faster responses, more natural interactions, and a stronger foundation to scale real-time AI to many more people,” said Sachin Katti of OpenAI.
“We are delighted to partner with OpenAI, bringing the world’s leading AI models to the world’s fastest AI processor. Just as broadband transformed the internet, real-time inference will transform AI, enabling entirely new ways to build and interact with AI models,” said Andrew Feldman, co-founder and CEO of Cerebras.
The capacity will come online in multiple tranches through 2028.
https://openai.com/index/cerebras-partnership/
|
|
|
| Say Bonjour to CroissantLLM: The Mini Open Bilingual Model |
|
Posted by: jasongeek - 01-14-2026, 01:43 PM - Forum: 2024
- No Replies
|
 |
Say Bonjour to CroissantLLM: The Mini Open Bilingual Model
CroissantLLM was trained on more tokens than Llama 2. It is a 'truly bilingual' language model that understands nuance
Ben Wodecki, Jr. Editor
February 12, 2024
At a Glance
French researchers developed CroissantLLM, a small language model with high fluency in French and English
It is small at 1.3 billion parameters but outperforms its weight class in French. It runs well on PCs and mobile devices.
Researchers trained it on high-quality French content. The goal: Give French equal footing with English in language models.
There is a delectable new open source model for English and French workloads - and it is snackable enough in size to run on mobile devices.
CroissantLLM is designed to run on consumer-grade local hardware while being “fully open, and truly bilingual,” according to a blog by Manuel Faysse, a lead researcher on the team that created it.
The goal is to make French on par with English in AI models. “With CroissantLLM, we aim to train a model in which English is not the dominant language and go for a 1:1 ratio of English and French data!” he wrote.
The model is just 1.3 billion parameters in size but was trained on three trillion tokens − more than the Llama 2 models − on a dataset that includes high-quality French content such as legal documents, business data, cultural content, and scientific information. It uses the Llama model architecture.
For example, you can prompt the model to explain French terms; Croissant’s deep linguistic knowledge brings out the nuances of the language − et voilà!
"CroissantLLM: A Truly Bilingual French-English Language Model" https://arxiv.org/pdf/2402.00786.pdf
The model and the underlying datasets were created by researchers mainly from French universities and businesses, including CentraleSupélec of the Université Paris-Saclay, Illuin Technology in Neuilly-sur-Seine, France, Sorbonne Université in Paris, and others.
Égalité with English
Faysse said a big challenge was getting enough high-quality French content for the training dataset. The team collected, filtered, and cleaned data from varied sources and modalities, including webpages, transcriptions, and movie titles.
They collected more than 303 billion tokens of monolingual French data and 36 billion tokens of French-English high-quality translation data. “We craft our final 3 trillion token dataset such that we obtain equal amounts of French and English data after upsampling,” Faysse said.
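A rough sanity check of what that 1:1 split implies: the team collected about 339 billion tokens of French-side data but needed 1.5 trillion French tokens in the final 3-trillion-token mix. The figures below come from the article; the assumption that all French data is reused uniformly is an illustrative simplification, not the team's actual recipe.

```python
# Back-of-the-envelope estimate of the French upsampling implied by the article.
french_monolingual = 303e9   # tokens of monolingual French data collected
translation_pairs = 36e9     # tokens of French-English translation data
total_budget = 3e12          # final training dataset size in tokens

# Targeting equal French and English shares after upsampling:
french_target = total_budget / 2

# If all collected French-side data were reused uniformly, each token
# would be seen roughly this many times during dataset construction:
upsample_factor = french_target / (french_monolingual + translation_pairs)
print(f"~{upsample_factor:.1f}x upsampling of the French data")
```

So the 1:1 ratio is achieved by repeating the French corpus roughly four and a half times, which is why data quality mattered so much to the team.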
He said the team purposely made CroissantLLM small after noticing that one of the biggest hurdles to widespread adoption of AI models is the difficulty in getting them to run on consumer-grade hardware.
Notably, the most downloaded models on Hugging Face were not the best performers, like Llama 2-70B or Mixtral 8x7B, but smaller models like Llama 2-7B or Mistral 7B, which are “easier and cheaper to serve and finetune,” he said.
CroissantLLM's small size lets it run “extremely quickly on lower end GPU servers, enabling for high throughput and low latency” as well as on CPUs and mobile devices at “decent speeds,” Faysse wrote.
The trade-off, he said, is that it does not match larger models in generalist capabilities such as reasoning, math, and coding. But the team behind CroissantLLM believes it will be “perfect” for specific industrial applications, translation, and chat functionality where larger models are not necessarily needed.
The researchers also introduced a new French benchmark to assess non-English language models: FrenchBench. FrenchBench Gen assesses tasks like title generation, summarization, question generation, and question answering − relying on the high-quality French Question Answering dataset, FQuaD. The Multiple Choice section of FrenchBench tests reasoning, factual knowledge, and linguistic capabilities of models.
When tested, CroissantLLM came out among the best-performing models of its size in French, and was even competitive with Mistral 7B.
Access CroissantLLM
You can download CroissantLLM Base and the Chat version from Hugging Face. The technical report detailing the model’s underlying architecture can be read via arXiv.
https://aibusiness.com/nlp/say-bonjour-t...lose-modal
|
|
|
| Twitter Co-Founder Launches Bitchat, a Security-Focused, Bluetooth Messaging App – No |
|
Posted by: jasongeek - 01-08-2026, 05:01 AM - Forum: 2025
- No Replies
|
 |
Twitter Co-Founder Launches Bitchat, a Security-Focused, Bluetooth Messaging App – No Internet Required
Published July 10, 2025
Written by J.R. Johnivan
Jack Dorsey, co-founder of Twitter and founder of Bluesky, has launched a new messaging app called Bitchat. The app leverages Bluetooth to facilitate secure, peer-to-peer communication, eliminating the need for the internet, servers, or phone numbers.
Developed as an alternative to traditional messaging platforms, Bitchat aims to provide fully decentralized communication using Bluetooth mesh networks. According to its GitHub page, the app allows “pure encrypted communication” without any central infrastructure or account setup, emphasizing user privacy and offline access.
Bitchat is currently in beta testing on TestFlight. The GitHub repo includes instructions for building native iOS, iPadOS, and macOS apps in Xcode, and there’s an unofficial Android build on GitHub.
Key features
Bitchat leverages Bluetooth mesh networking as its primary communications protocol, so it doesn’t require an active internet connection. A future update will add support for Wi-Fi Direct, ultimately increasing both the range and speed of the app; a date for this update has yet to be announced.
Other features include:
Decentralization with automatic peer discovery.
Group chat with the ability to create password-protected channels as needed.
Cached messages that are delivered to offline users as soon as they log in.
IRC-style commands, like /join, /msg, and /who, to make navigation as easy as possible.
Message compression reduces bandwidth usage by up to 70%.
Adaptive power modes include ultra-low power, power saver, balanced mode, and performance mode.
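The "up to 70%" compression figure above is plausible for chatty text payloads. The article doesn't name Bitchat's actual codec, so the snippet below uses Python's stdlib zlib purely as a stand-in to show why short, repetitive messages compress well over a low-bandwidth mesh link.

```python
import zlib

# Stand-in demo: DEFLATE on a text payload. Bitchat's real compression
# scheme isn't specified in the article; zlib is only illustrative here.
message = ("Meet at the north gate after the opening act. "
           "Bring water, battery packs, and the spare radio. ") * 4
raw = message.encode("utf-8")
packed = zlib.compress(raw, level=9)

saving = 1 - len(packed) / len(raw)
print(f"{len(raw)} bytes -> {len(packed)} bytes ({saving:.0%} saved)")
```

Real-world savings depend heavily on message content; very short messages can even grow slightly after compression, which is why protocols typically skip compression below a size threshold.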
Security and privacy controls
Bitchat also includes advanced security and privacy controls that aren’t seen in many modern messaging apps. These controls include:
Total anonymity by eliminating account registrations, email signups, and phone number verifications.
End-to-end encryption for private messages via Curve25519 and AES-GCM algorithms, Argon2id password derivation and AES-256-GCM encryption for channel-specific messages, and Ed25519 for authenticating and digitally signing messages.
Dummy messaging and timing obfuscation to help cover your tracks.
Emergency data wiping by triple-tapping the device.
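The primitives named above (Curve25519, AES-GCM, Ed25519) are all standard and available in the widely used `cryptography` package. The sketch below shows how a private-message flow built from them fits together; it is not Bitchat's actual code, and the HKDF key-derivation step and nonce handling are simplified assumptions for the demo.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each peer holds a Curve25519 key for key agreement and an Ed25519 key for signing.
alice_dh, bob_dh = X25519PrivateKey.generate(), X25519PrivateKey.generate()
alice_sign = Ed25519PrivateKey.generate()

# 1. Curve25519 Diffie-Hellman, then derive a symmetric key.
#    (The HKDF step is an assumption; the article doesn't specify the KDF.)
shared = alice_dh.exchange(bob_dh.public_key())
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"bitchat-demo").derive(shared)

# 2. AES-GCM for the message body, Ed25519 signature over the ciphertext.
nonce = os.urandom(12)
plaintext = b"meet at the gate"
ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
signature = alice_sign.sign(ciphertext)

# 3. Bob derives the same key from his side, verifies, then decrypts.
bob_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"bitchat-demo").derive(bob_dh.exchange(alice_dh.public_key()))
alice_sign.public_key().verify(signature, ciphertext)  # raises if tampered with
print(AESGCM(bob_key).decrypt(nonce, ciphertext, None))
```

For channel messages, the article says Bitchat swaps the Diffie-Hellman step for Argon2id password derivation, so every member holding the channel password can derive the same AES-256-GCM key.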
The combination of general functionality, security features, and privacy controls sets Bitchat apart from messaging apps like Facebook Messenger, WhatsApp, Snapchat, Telegram, Kik, and others.
Potential use cases
Since Bitchat doesn’t require an active internet connection, the app can be used in nearly any emergency, even if local infrastructure is failing. Soldiers can use Bitchat to communicate securely during wartime, and it also has some pertinent uses in domestic law enforcement.
The messaging app can also be used in non-emergency scenarios. It’s excellent for outdoor recreation, such as hiking and camping. And, Bitchat can be used to maintain contact with friends or family members at large concerts, festivals, and sporting events.
Combining the new with the old
Dorsey’s Bitchat does a great job of combining traditional messaging functionality with the latest tech innovations. By using IRC-style chat protocols in conjunction with features such as end-to-end encryption and decentralization, the platform caters to users who prefer pre-social media chat rooms but still want to leverage the next-generation connectivity and security controls available today.
Editor’s note: This has been updated to reflect that the GitHub repo includes instructions for building native iOS, iPadOS, and macOS apps in Xcode and to share the unofficial Android build on GitHub, which was not available when the article was first published.
|
|
|
| AMD CEO welcomes us to the "YottaScale era" - Lisa Su says AI will need YottaFLOPS of |
|
Posted by: jasongeek - 01-08-2026, 04:13 AM - Forum: 2026
- No Replies
|
 |
AMD CEO welcomes us to the "YottaScale era" - Lisa Su says AI will need YottaFLOPS of compute power soon
By Mike Moore
Lisa Su declares "AI is for everyone" at CES 2026 keynote
The CEO of AMD has declared that the AI world is about to enter a whole new era which will require huge amounts of compute power.
Speaking at her keynote at CES 2026, Dr. Lisa Su said the world is set to enter the 'YottaScale' era as demand for AI and the power behind it continues to grow.
She predicted the world would need up to 10 yottaFLOPS of compute by the end of the decade (a yottaFLOP is a one followed by 24 zeros) - around 10,000 times the global AI compute of 2022, which stood at about one zettaFLOP (a one followed by 21 zeros).
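The 10,000x figure follows directly from the SI prefixes (yotta = 10^24, zetta = 10^21), which a one-line check confirms:

```python
# SI prefixes: yotta = 1e24, zetta = 1e21
projected = 10 * 10**24   # 10 yottaFLOPS projected by the end of the decade
baseline = 1 * 10**21     # ~1 zettaFLOP of global AI compute in 2022

print(projected / baseline)  # 10000.0
```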
A new era
Admitting that there is currently not enough compute available for all the many things people want to do with AI, Su outlined AMD's future strategy to address this.
"There's just never, ever been anything like this in the history of computing," she admitted.
Primarily, this will involve a focus on integrated systems, bringing together CPUs, GPUs, networking, and software, which all work together to efficiently scale AI infrastructure.
"AI is the most important technology of the last 50 years, and I can say it's absolutely our number one priority at AMD," Su said.
"It's already touching every major industry, whether you're going to talk about health care or science or manufacturing or commerce, and we're just scratching the surface, AI is going to be everywhere over the next few years. And most importantly, AI is for everyone."
Su unveiled a number of new AMD products during her keynote, including the company's next generation of AI hardware: the MI455 GPU, EPYC Venice CPUs, and Helios rack-scale AI solutions, all of which promise huge leaps forward in performance and efficiency.
https://www.techradar.com/pro/amd-ceo-we...power-soon
|
|
|
| EIN Presswire |
|
Posted by: jasongeek - 01-04-2026, 08:42 PM - Forum: Business Directory
- No Replies
|
 |
The World’s Leading Press Release Distribution Service
Reach Millions With One Click
Get published on Google News, AP News, USA TODAY Network, & 100+ NBC, FOX, ABC & CBS affiliates
Reach journalists and media influencers
Feed Large Language Models and AI chatbots like ChatGPT, Claude, and Gemini
Build lasting visibility with SEO and AI discoverability
Target countries & industry verticals with precision
Place your news in global databases & newswires
Distribute in any language for global reach
Save with affordable, cost-effective pricing
Pay as you go with no subscriptions
https://www.einpresswire.com/
https://www.newsmatics.com/
Press Release Pricing
Save money when buying in bulk.
PRICING (as of 1/4/26)
$149/1 press release ($149 per press release)
$499/5 press releases plus 1 free ($83.17 per press release)
$999/15 press releases ($66.60 per press release)
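The per-release figures above check out, with the caveat that the middle tier is priced over 6 releases because of the free one:

```python
# Verify the advertised per-release prices for each tier.
tiers = {
    "single": (149, 1),
    "5 + 1 free": (499, 6),   # the free release makes it 6 total
    "15-pack": (999, 15),
}
for name, (price, count) in tiers.items():
    print(f"{name}: ${price / count:.2f} per release")
```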
|
|
|
| China AI chipmaker Biren soars in Hong Kong debut as IPO wave builds |
|
Posted by: jasongeek - 01-04-2026, 04:42 PM - Forum: 2026
- No Replies
|
 |
China AI chipmaker Biren soars in Hong Kong debut as IPO wave builds
By Yantoultra Ngui and Donny Kwok
January 2, 2026, 12:48 AM PST · Updated January 2, 2026
Shares jump on strong debut, hit HK$42.88 intraday high
Hong Kong IPO market rebound fuels AI listings
Seven firms filed listing applications on January 1
SINGAPORE/HONG KONG, Jan 2 (Reuters) - Shares of Chinese AI chip designer Shanghai Biren Technology (6082.HK) closed up 76% in their Hong Kong debut on Friday, the financial hub's first listing of 2026.
The company's shares opened at HK$35.70, hit an intraday high of HK$42.88 and closed at HK$34.46, up 76% from the offer price of HK$19.60.
That compared to a 2.8% rise for the benchmark Hang Seng Index (.HSI). Biren was also the third most actively traded stock by turnover on the Hong Kong bourse, with 150.7 million shares worth HK$5.52 billion ($707.7 million) changing hands.
The strong debut follows a blockbuster year for Hong Kong's equity market in 2025 and heralds a wave of chip and AI offerings this year as China accelerates efforts to strengthen domestic alternatives in response to U.S. curbs on technology exports.
"Chinese AI startups are going public faster than U.S. giants thanks to supportive domestic policy, clear paths to revenues from enterprise customers, and most importantly, a valuation small enough for the current IPO market," said Winston Ma, an adjunct professor at NYU School of Law and former head of North America for CIC, China's sovereign wealth fund.
Li He, a partner at law firm Davis Polk who has worked on several AI IPOs including Biren's, said this rush of AI offerings reflected investor conviction and issuer necessity.
"AI is fundamentally transformative, driving keen investor appetite," Li said.
Biren raised HK$5.58 billion by selling 284.8 million H shares at HK$19.60 each, the top of a marketed range.
Institutional demand was nearly 26 times the shares on offer, while the retail tranche was oversubscribed about 2,348 times, exchange filings showed.
At the offer price, Biren's market capitalisation stood at HK$46.9 billion, based on 2.396 billion shares outstanding.
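The reported figures are internally consistent, as a quick sanity check shows:

```python
# Sanity-check the Biren IPO numbers reported by Reuters.
offer = 19.60            # HK$ offer price
close = 34.46            # HK$ first-day close
shares_sold = 284.8e6    # H shares sold in the IPO
shares_out = 2.396e9     # total shares outstanding

print(f"first-day gain: {close / offer - 1:.0%}")         # ~76%
print(f"raised: HK${offer * shares_sold / 1e9:.2f}B")     # ~HK$5.58B
print(f"market cap: HK${offer * shares_out / 1e9:.2f}B")  # ~HK$46.9B reported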
Founded in 2019, Biren develops general-purpose graphics processing units (GPUs) and intelligent computing systems for artificial intelligence and high-performance computing.
Its co-founders include Zhang Wen, a former president at SenseTime (0200.HK), and Jiao Guofang, who previously worked at Qualcomm (QCOM.O) and Huawei (HWT.UL).
The company first drew attention in 2022 with its BR100 chip, touted as a domestic rival to advanced processors from U.S. AI leader Nvidia (NVDA.O).
Biren will spend most of the IPO proceeds on research and development and commercialisation, its IPO prospectus showed.
The prospectus flagged risk from U.S. export controls after the group was added to Washington's Entity List in October 2023, which limits its access to certain technology.
It also cited competition and highlighted opportunities from China's push for tech self-sufficiency and policy support.
Cornerstone investors include 3W Fund, Qiming Venture Partners and Ping An Life Insurance, the prospectus showed.
"Its successful listing not only marks a key phase in the company's growth, but also demonstrates the evolution of China's tech entrepreneurship towards a new stage centered on original innovation," said Alex Zhou, managing partner of Qiming Venture Partners, in a statement on Friday.
CHINESE AI, TECH PIPELINE
As much as $36.5 billion was raised in Hong Kong from 114 new listings in 2025 - the city's highest since 2021 and more than triple the previous year - LSEG data showed at year-end.
A wave of AI and semiconductor IPOs powered the comeback and is widely expected to propel deal flow in 2026.
Seven companies submitted A1 applications on January 1, HKEX filings showed. One was xTool Innovate, which filed an application for a main board listing and appointed Morgan Stanley (MS.N) and Huatai Financial Holdings as overall coordinators.
Separately, Chinese internet search leader Baidu (9888.HK) said on Friday its AI chip unit Kunlunxin has filed a Hong Kong IPO application, confirming a Reuters report in early December.
Hong Kong's IPO pipeline includes AI startups and chipmakers, with Zhipu AI and Iluvatar CoreX to debut next on January 8.
"Is the Hong Kong AI IPO boom sustainable? It depends on whether global IPO investors, such as Middle East sovereign wealth funds, would buy in a shift of global AI dominance, prioritising immediate enterprise integration over long-term AGI research," Ma said.
Reporting by Yantoultra Ngui in Singapore and Donny Kwok; Additional reporting by Kane Wu; Editing by Christopher Cushing and Thomas Derpinghaus
https://www.reuters.com/world/asia-pacif...026-01-02/
|
|
|
| Elon Musk's Grok AI floods X with sexualized photos of women and minors |
|
Posted by: jasongeek - 01-04-2026, 04:37 PM - Forum: 2026
- No Replies
|
 |
Elon Musk's Grok AI floods X with sexualized photos of women and minors
By A.J. Vicens and Raphael Satter
January 3, 2026, 12:56 PM PST
WASHINGTON/DETROIT, Jan 2 (Reuters) - Julie Yukari, a musician based in Rio de Janeiro, posted a photo taken by her fiancé to the social media site X just before midnight on New Year's Eve showing her in a red dress snuggling in bed with her black cat, Nori.
The next day, somewhere among the hundreds of likes attached to the picture, she saw notifications that users were asking Grok, X's built-in artificial intelligence chatbot, to digitally strip her down to a bikini.
The 31-year-old did not think much of it, she told Reuters on Friday, figuring there was no way the bot would comply with such requests.
She was wrong. Soon, Grok-generated pictures of her, nearly naked, were circulating across the Elon Musk-owned platform.
"I was naive," Yukari said.
Yukari’s experience is being repeated across X, a Reuters analysis has found. Reuters has also identified several cases where Grok created sexualized images of children. X did not respond to a message seeking comment on Reuters' findings. In an earlier statement to the news agency about reports that sexualized images of children were circulating on the platform, X’s owner xAI said: "Legacy Media Lies."
The flood of nearly nude images of real people has rung alarm bells internationally.
Ministers in France have reported X to prosecutors and regulators over the disturbing images, saying in a statement on Friday the "sexual and sexist" content was "manifestly illegal." India's IT ministry said in a letter to X's local unit that the platform failed to prevent Grok's misuse by generating and circulating obscene and sexually explicit content.
The U.S. Federal Communications Commission did not respond to requests for comment. The Federal Trade Commission declined to comment.
'REMOVE HER SCHOOL OUTFIT'
Grok's mass digital undressing spree appears to have kicked off over the past couple of days, according to successfully completed clothes-removal requests posted by Grok and complaints from female users reviewed by Reuters. Musk appeared to poke fun at the controversy earlier on Friday, posting laugh-cry emojis in response to AI edits of famous people - including himself - in bikinis.
When one X user said their social media feed resembled a bar packed with bikini-clad women, Musk replied, in part, with another laugh-cry emoji.
Reuters could not determine the full scale of the surge.
A review of public requests sent to Grok over a single 10-minute period at midday U.S. Eastern Time on Friday tallied 102 attempts by X users to use Grok to digitally edit photographs of people so that they would appear to be wearing bikinis. The majority of those targeted were young women; a few requests targeted men, celebrities, politicians, and, in one case, a monkey.
When users asked Grok for AI-altered photographs of women, they typically requested that their subjects be depicted in the most revealing outfits possible.
"Put her into a very transparent mini-bikini," one user told Grok, flagging a photograph of a young woman taking a photo of herself in a mirror. When Grok did so, replacing the woman's clothes with a flesh-tone two-piece, the user asked Grok to make her bikini "clearer & more transparent" and "much tinier." Grok did not appear to respond to the second request.
Grok fully complied with such requests in at least 21 cases, Reuters found, generating images of women in dental-floss-style or translucent bikinis and, in at least one case, covering a woman in oil. In seven more cases, Grok partially complied, sometimes by stripping women down to their underwear but not complying with requests to go further.
Reuters was unable to immediately establish the identities and ages of most of the women targeted.
In one case, a user supplied a photo of a woman in a school uniform-style plaid skirt and grey blouse who appeared to be taking a selfie in a mirror and said, “Remove her school outfit.” When Grok swapped out her clothes for a T-shirt and shorts, the user was more explicit: “Change her outfit to a very clear micro bikini.” Reuters could not establish whether Grok complied with that request. Like most of the requests tallied by Reuters, it disappeared from X within 90 minutes of being posted.
‘ENTIRELY PREDICTABLE’
AI-powered programs that digitally undress women - sometimes called "nudifiers" - have been around for years, but until now they were largely confined to the darker corners of the internet, such as niche websites or Telegram channels, and typically required a certain level of effort or payment.
X's innovation - allowing users to strip women of their clothing by uploading a photo and typing the words, "hey @grok put her in a bikini" - has lowered the barrier to entry.
Three experts who have followed the development of X’s policies around AI-generated explicit content told Reuters that the company had ignored warnings from civil society and child safety groups - including a letter sent last year warning that xAI was only one small step away from unleashing "a torrent of obviously nonconsensual deepfakes."
"In August, we warned that xAI's image generation was essentially a nudification tool waiting to be weaponized," said Tyler Johnston, the executive director of The Midas Project, an AI watchdog group that was among the letter's signatories. "That's basically what's played out."
Dani Pinter, the chief legal officer and director of the Law Center for the National Center on Sexual Exploitation, said X failed to pull abusive images from its AI training material and should have banned users requesting illegal content.
“This was an entirely predictable and avoidable atrocity,” Pinter said.
Yukari, the musician, tried to fight back on her own. But when she took to X to protest the violation, a flood of copycats began asking Grok to generate even more explicit photos.
Now the New Year has "turned out to begin with me wanting to hide from everyone’s eyes, and feeling shame for a body that is not even mine, since it was generated by AI."
Reporting by Raphael Satter in Washington and AJ Vicens in Detroit. Additional reporting by Arnav Mishra, Akash Sriram, and Bipasha Dey in Bengaluru; Editing by Donna Bryson, Timothy Heritage, Chizu Nomiyama, Daniel Wallis and Thomas Derpinghaus
https://www.reuters.com/legal/litigation...026-01-02/
|
|
|
| OpusClip Celebrates $30M in Funding and the Launch of "ClipAnything" |
|
Posted by: jasongeek - 01-01-2026, 03:21 AM - Forum: 2024
- No Replies
|
 |
OpusClip Celebrates $30M in Funding and the Launch of "ClipAnything"
August 31, 2024
It’s a big day for us at OpusClip! We’re announcing that we’ve raised $30M in funding, which includes our recent Series A as well as prior seed funding. The Series A was led by Millennium New Horizons, with support from AI Grant, Samsung Next, GTMfund, DCM Ventures, Alumni Ventures, Fellows Fund, Alpine VC, and our angel investor Jason Lemkin.
Since OpusClip launched a year ago, we have experienced phenomenal growth. We've surpassed 6M users and eight figures in ARR. Billboard.com, Univision, Telefónica, Jenny Hoyos, and Scott Galloway are just a few of the incredible customers that leverage OpusClip.
OpusClip is a rare instance where I was a customer first, then had the chance to invest. After evaluating every option in the market as a user, it was clear OpusClip was the best option. Its explosive growth in a highly competitive space is a testament to its status as a best-of-breed AI app for marketers, businesses and creators. I have no doubt that OpusClip is on its way to becoming the next billion-dollar company!
– Jason Lemkin, founder and CEO of SaaStr
AI generated video startups are all the rage, but the majority of them help you create animations or lifeless avatars. While these technologies will certainly have their place in the ecosystem, we believe authentic, real video is always going to be the most engaging and compelling. The bigger opportunity comes with what you can do with that video once it’s recorded.
That’s where OpusClip comes in. OpusClip is an AI-powered video editing tool that transforms long-format videos into short, high-quality clips for social media platforms like TikTok, YouTube Shorts, and Instagram Reels.
OpusClip is designed to give creators more creative time with AI automation and actionable growth insight, and you can see this ethos in every part of our product. Our AI automatically extracts highlights from different parts of your video, reframes clips for various aspect ratios, adds animated captions and B-Roll to increase visual appeal, and allows for directly posting to social media channels.
For brands and creators, we know great content needs to be measurable. That’s why our AI generates a virality score for each clip based on the analysis of tens of thousands of viral videos, reducing guesswork and delivering better results.
Introducing ClipAnything
While we’re already growing fast, we don’t plan to slow down anytime soon. As part of today’s news we’re unveiling ClipAnything, the world’s first multimodal AI clipping tool that can clip any moment from any video with natural language prompts.
Millions of creators, producers, editors, and other professionals already rely on OpusClip to save time and grow faster by turning their talking-head videos into viral shorts, but talking-head videos are just a small share of all the video content out there. That’s why we’re excited to launch ClipAnything, designed to help everyone create and grow faster with any type of content, whether it’s vlogs, sports, TV shows, or other video formats.
Just type your prompt, and OpusClip will automatically identify the right moments to highlight for maximum impact. This is made possible thanks to our state-of-the-art video understanding, which, unlike LLMs that focus solely on words, understands and reasons across visuals, actions, emotions, audio, and dialogue. We built this in collaboration with the creative industry, from award-winning producers to major media and entertainment enterprises, to embed producer-grade storytelling and editing strategies in every clip. The new ClipAnything model has been fine-tuned based on data across our 6M users and incorporates their likes, dislikes, exports, and social performance.
Thank you to our incredible community of users and supporters. Your feedback and enthusiasm drive us to improve and innovate. We look forward to what the future holds as we continue to redefine the landscape of video editing with AI. Stay tuned for more exciting updates and innovations from OpusClip. Together, we are shaping the future of content creation.
|
|
|
| Anthropic's Claude Sonnet 4.5 codes for 30 hours straight |
|
Posted by: jasongeek - 12-30-2025, 03:12 PM - Forum: 2025
- No Replies
|
Anthropic's Claude Sonnet 4.5 codes for 30 hours straight
Claude 4.5 autonomously built chat app with 11K lines of code in marathon session
by The Tech Buzz
PUBLISHED: Mon, Sep 29, 2025, 10:37 AM PDT | UPDATED: Tue, Dec 30, 2025, 7:10 AM PST
The Buzz
- Claude Sonnet 4.5 coded autonomously for 30 hours, generating 11,000 lines of code for a Slack-like chat app
- Performance jumped 4x from Anthropic's previous 7-hour autonomous coding record, set in May
- Model excels at computer navigation - 3x better than October 2024 versions at browser tasks
- Enterprise customers like Canva report success with complex, long-context engineering tasks
Anthropic just dropped Claude Sonnet 4.5, and it's rewriting the rules for autonomous AI. The model coded for 30 hours straight without human intervention, building a complete chat application with 11,000 lines of code. This isn't incremental progress - it's a 4x leap from their previous 7-hour benchmark that signals AI agents are ready for real enterprise workloads.
Anthropic just made every enterprise CTO sit up and take notice. The company's new Claude Sonnet 4.5 model didn't just write code - it built an entire chat application resembling Slack or Microsoft Teams during a 30-hour autonomous coding marathon. The AI generated 11,000 lines of production-ready code and only stopped when the job was complete.
This represents a massive leap forward in AI agent capabilities. Anthropic's previous Opus 4 model made headlines in May for running autonomously for seven hours. Now they've more than quadrupled that endurance while maintaining code quality throughout the extended session.
"We're calling Claude Sonnet 4.5 the best model in the world for real-world agents, coding, and computer use," Anthropic declared in today's announcement. The company positioned this as their strongest play yet in the rapidly intensifying battle with OpenAI and Google for AI agent supremacy.
The timing couldn't be more strategic. Just days ago, OpenAI launched Pulse, their morning routine ChatGPT feature, while Google continues pushing Gemini capabilities. But Anthropic's approach focuses on sustained, complex tasks that mirror real enterprise workflows.
Early enterprise results are validating this strategy. Canva, one of the beta testers, reported Claude Sonnet 4.5 excelled at "complex, long-context tasks - from engineering in our codebase to in-product features and research." The model shows particular strength in cybersecurity, financial services, and research applications where sustained focus matters more than quick responses.
The computer navigation improvements are equally impressive. Dianne Penn, head of product management at Anthropic, told The Verge that Claude Sonnet 4.5 is "more than three times as skilled at navigating a browser and using a computer" compared to their October 2024 technology. This builds on Anthropic's Computer Use feature that debuted nearly a year ago.
Scott White, product lead for Claude.ai, described the model as operating at "chief-of-staff level." It can coordinate calendars across multiple people, analyze data dashboards for insights, and draft status updates based on meeting notes - essentially handling the cognitive overhead that burns out human executives.
The development infrastructure is equally ambitious. Anthropic is packaging Claude Sonnet 4.5 with virtual machines, memory management, context handling, and multi-agent support. "This essentially packages the same building blocks that power Claude Code - enabling developers to build their own cutting-edge agents," the company explained.
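The "building blocks" idea can be pictured as a tool-use loop: the model proposes an action, the harness executes it and feeds the observation back, and the loop continues until the model signals it is done. The sketch below is a generic illustration of that pattern, not Anthropic's SDK; `call_model` stands in for a real LLM call, and the action format is an assumption made for the example:

```python
# Minimal agent-loop sketch: ask the model for the next action, run it,
# append the observation to the transcript, stop when the model says "done".
# `call_model` is any callable returning an action dict like
# {"tool": name, "args": {...}} or {"tool": "done", "result": ...},
# so the loop itself can be exercised offline with a stub.
def run_agent(task, call_model, tools, max_steps=50):
    transcript = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(transcript)
        if action.get("tool") == "done":
            return action.get("result"), transcript
        # Execute the requested tool and feed the result back to the model.
        observation = tools[action["tool"]](**action.get("args", {}))
        transcript.append({"role": "tool", "content": str(observation)})
    raise RuntimeError("agent did not finish within max_steps")
```

In a real harness, `call_model` would wrap an API client and `tools` would include things like shell execution, file edits, and browser control; bounding the loop with `max_steps` (or a wall-clock budget) is what keeps a 30-hour run from becoming an unbounded one.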
Penn shared a telling use case: she uses Claude Sonnet 4.5 for recruiting at Anthropic itself. "I have a continuous running prompt that says, 'Do a deep web search, come up with parameters for profiles to source for certain types of roles on my team,'" she explained. "It generates a spreadsheet with LinkedIn profiles so I can email them directly."
The model's sustained performance addresses a critical enterprise pain point. While consumer AI tools excel at quick tasks, enterprise workflows often require hours of sustained context and complex reasoning. A 30-hour coding session without degradation suggests AI agents can finally handle the marathon projects that define enterprise software development.
This positions Anthropic strategically against competitors. While OpenAI focuses on consumer engagement and Google pushes search integration, Anthropic is betting on enterprise utility and sustained performance. The company received feedback from "the GitHubs and Cursors of the world" - developer-focused platforms where coding endurance matters most.
The broader implications ripple across the enterprise software landscape. If AI agents can sustain complex tasks for 30+ hours, traditional software development cycles could compress dramatically. Enterprise buyers are already taking notice of these autonomous capabilities for everything from cybersecurity monitoring to financial analysis.
Claude Sonnet 4.5's 30-hour autonomous coding marathon isn't just a technical milestone - it's a signal that AI agents are ready for enterprise-grade workloads. While competitors chase consumer features, Anthropic is positioning itself as the go-to platform for sustained, complex business tasks. The real test will be whether enterprises can integrate these capabilities into existing workflows, but early results from companies like Canva suggest the transition is already underway.
https://www.techbuzz.ai/articles/anthrop...s-straight
|
|
|
|