VeriSilicon (688521.SH) recently announced that its AI-ISP custom chip solution has been successfully adopted in a customer’s mass-produced smartphones, reaffirming the company’s comprehensive one-stop custom silicon service capabilities in AI vision processing. VeriSilicon’s AI-ISP custom chip solution can integrate proprietary or third-party Neural Network Processing Unit (NPU) IP and Image Signal Processing (ISP) IP. By combining traditional image processing techniques with AI algorithms, ...
VeriSilicon (688521.SH) today announced the latest advancements in its high-performance and scalable GPGPU-AI computing IPs, which are now empowering next-generation automotive electronics and edge server applications. Combining programmable parallel computing with a dedicated Artificial Intelligence (AI) accelerator, these IPs offer exceptional computing density for demanding AI workloads such as Large Language Model (LLM) inference, multimodal perception, and real-time decision-making in ther...
Alnylam Pharmaceuticals, Inc. (Nasdaq: ALNY), the leading RNA interference (RNAi) therapeutics company, today announced that the European Commission (EC) has granted approval for the treatment of wild-type or hereditary transthyretin amyloidosis in adult patients with cardiomyopathy (ATTR-CM) as an additional indication for its orphan RNAi therapeutic AMVUTTRA® (vutrisiran). The approval broadens the indication for AMVUTTRA, which now becomes the first and only RNAi therapeutic approved by the ...
Capcom Co., Ltd. (TOKYO:9697) today announced that Monster Hunter Rise: Sunbreak, released on June 30, 2022, has sold over 10 million cumulative units globally. Monster Hunter Rise: Sunbreak is a massive premium expansion for Mon...
VeriSilicon (688521.SH) today announced that its ultra-low energy and high-performance Neural Network Processing Unit (NPU) IP now supports on-device inference of large language models (LLMs) with AI computing performance scaling beyond 40 TOPS. This energy-efficient NPU architecture is specifically designed to meet the increasing demand for generative AI capabilities on mobile platforms. It not only delivers powerful computing performance for AI PCs and other end devices, but is also optimized...