Neuromorphic
Brain-inspired computing using nanoscale devices
Intel Labs Neuromorphic
Santa Clara, United States
Developer of Loihi neuromorphic research chip and Hala Point system
BrainChip
Laguna Hills, United States
Commercial neuromorphic AI processor company with Akida chip
SynSense
Zurich, Switzerland
Neuromorphic vision and AI processor company spun from ETH Zurich
Innatera Nanosystems
Delft, Netherlands
Ultra-low-power neuromorphic sensing and processing company
GrAI Matter Labs
Paris, France
Developer of GrAI VIP neuromorphic processor for edge AI
Rain Neuromorphics
San Francisco, United States
Analog-digital neuromorphic chip for brain-like AI acceleration
NanoLund
Lund, Sweden
Lund University nanoscience center
Luminous Computing
Palo Alto, United States
Optical computing for AI at massive scale
LightOn
Paris, France
Photonic co-processors for AI
Tenstorrent
Toronto, Canada
AI processors using RISC-V and novel architecture
Cerebras Systems
Sunnyvale, United States
Pioneer of wafer-scale AI chips with the world's largest processor
Graphcore
Bristol, United Kingdom
Intelligence Processing Units for AI
Habana Labs
Tel Aviv, Israel
AI training and inference processors (acquired by Intel)
Hailo
Tel Aviv, Israel
Hailo is an Israeli semiconductor company specializing in energy-efficient edge AI processors for real-time applications in automotive, smart city, industrial, and retail settings. Founded in 2017, the company builds chips that let devices run deep learning workloads locally instead of relying on cloud connectivity. Its flagship Hailo-8 processor delivers up to 26 tera-operations per second (TOPS) at low power, suiting battery-powered and embedded systems; the architecture uses a dataflow approach that restructures neural networks to match the hardware, improving performance per watt over GPU- and CPU-based alternatives. The newer Hailo-15 integrates the AI accelerator with vision processing on a single SoC for intelligent cameras, while Hailo-8-based solutions are used for advanced driver assistance systems (ADAS) and other automotive workloads. On-device processing reduces latency, preserves privacy, and cuts bandwidth and operating costs. The accompanying software development kit accepts models from major frameworks, including TensorFlow, PyTorch, ONNX, and Keras. With over $200 million in funding and partnerships with automotive OEMs, camera manufacturers, and system integrators, Hailo supplies edge AI compute for smart cameras, robotics, healthcare devices, and industrial automation systems.
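As a rough illustration of the deployment flow such edge SDKs build on, the sketch below shows the generic, framework-level first step: exporting a trained PyTorch model to ONNX, which a vendor toolchain would then quantize and compile for its chip. This is plain PyTorch/ONNX code under assumed model and file names, not Hailo SDK code.

import torch
import torchvision

# Load a small pretrained vision model and trace it with a dummy input so the
# exporter can record the graph. Model choice, input shape, and output path are
# illustrative assumptions, not Hailo requirements.
model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
dummy_input = torch.randn(1, 3, 224, 224)   # NCHW tensor the exporter traces with

torch.onnx.export(
    model,
    dummy_input,
    "mobilenet_v2.onnx",        # hypothetical file handed to the vendor compiler
    input_names=["images"],
    output_names=["logits"],
    opset_version=13,
)
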
Mythic
Austin, United States
Mythic is a semiconductor company developing analog compute-in-memory technology for AI inference. Founded in 2012 and based in Austin, Texas, Mythic stores neural network weights directly in analog flash memory and performs matrix computations with analog circuits, rather than shuttling data between separate memory and digital processing units. Avoiding that data movement sidesteps the von Neumann bottleneck that limits conventional architectures and yields high energy efficiency and compute density. The flagship M1076 Analog Matrix Processor combines analog computation with digital control, delivers up to 25 TOPS while drawing only a few watts, and integrates multiple tiles of analog compute arrays that scale to demanding workloads in computer vision, natural language processing, and sensor fusion. The approach is well suited to power-constrained edge devices such as smart cameras, drones, augmented reality hardware, automotive systems, and industrial IoT. Mythic's software stack imports models trained with standard AI frameworks on conventional platforms. With over $100 million in funding from venture capital firms and strategic investors, Mythic is commercializing analog inference as an alternative to power-hungry digital accelerators while maintaining the accuracy and flexibility required for production deployments.
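A minimal conceptual sketch of what a compute-in-memory tile evaluates: a quantized matrix-vector product, with column-wise sums standing in for the currents an analog array would accumulate. The array size, bit widths, and noise level below are illustrative assumptions, not Mythic specifications.

import numpy as np

# Weights act as programmed conductances, inputs as applied voltages; each
# output is a column-wise sum of voltage*conductance products (Ohm's law plus
# Kirchhoff's current law), digitized by an ADC at the column output.
rng = np.random.default_rng(0)
weights = rng.integers(-128, 128, size=(256, 256)).astype(np.float32)  # "programmed" 8-bit weights
inputs = rng.integers(0, 256, size=256).astype(np.float32)             # activations driven as voltages

ideal = weights.T @ inputs                              # what a digital MAC array would compute
noise = rng.normal(0.0, 0.002 * np.abs(ideal))          # small assumed read/ADC noise per column
measured = ideal + noise                                # what the column ADCs report

print(np.max(np.abs(measured - ideal) / (np.abs(ideal) + 1e-9)))  # relative error stays small
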
Untether AI
Toronto, Canada
Untether AI is a Canadian semiconductor company developing at-memory compute architectures for AI inference acceleration. Founded in 2018 and headquartered in Toronto, the company places processing elements directly alongside memory banks so computation happens where the data resides, eliminating much of the data movement that dominates energy use in conventional von Neumann designs. Its tsunAImi inference accelerator runs a massively parallel array of processing elements tightly coupled to memory, delivering over 2 petaOPS of inference throughput with strong energy efficiency for large-scale models in cloud infrastructure, autonomous systems, and intelligent edge deployments. The company's software development kit supports TensorFlow, PyTorch, and ONNX, allowing existing models to be migrated with modest engineering effort, and the platform targets inference latency, power consumption, total cost of ownership, and scalability for production workloads. With over $150 million raised from venture capital and strategic investors, Untether AI is aimed at workloads that outgrow conventional GPU and CPU inference, including autonomous vehicles, robotics, natural language processing, computer vision, and large-scale recommendation systems run by hyperscale cloud providers and enterprises.
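A back-of-envelope sketch of why at-memory compute pays off. The energy constants are commonly cited ~45 nm figures (Horowitz, ISSCC 2014) used here as assumptions, and the layer size is illustrative; none of the numbers are Untether measurements.

# An off-chip DRAM access costs orders of magnitude more energy than an 8-bit
# multiply-accumulate, so keeping weights next to the compute elements shrinks
# a layer's energy budget dramatically.
PJ_PER_INT8_MAC  = 0.25    # ~0.2 pJ multiply + ~0.03 pJ add
PJ_PER_DRAM_BYTE = 160.0   # ~640 pJ per 32-bit DRAM access / 4 bytes
PJ_PER_SRAM_BYTE = 1.25    # ~5 pJ per 32-bit on-chip SRAM access / 4 bytes

def layer_energy_uj(macs, bytes_moved, pj_per_byte):
    """Total energy in microjoules for one layer: arithmetic plus data movement."""
    return (macs * PJ_PER_INT8_MAC + bytes_moved * pj_per_byte) * 1e-6

# A 1000x1000 fully connected layer: 1e6 MACs, ~1 MB of weights touched once.
macs, weight_bytes = 1_000_000, 1_000_000
print(layer_energy_uj(macs, weight_bytes, PJ_PER_DRAM_BYTE))  # weights streamed from DRAM
print(layer_energy_uj(macs, weight_bytes, PJ_PER_SRAM_BYTE))  # weights resident next to compute
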
Blaize
El Dorado Hills, United States
Blaize is an edge AI computing company developing a programmable graph streaming processor architecture for automotive, smart vision, and enterprise edge applications. Founded in 2010 and based in El Dorado Hills, California, Blaize represents neural networks as computational graphs and streams data along those graph edges, improving performance, power efficiency, and flexibility relative to fixed-function accelerator designs. The Pathfinder P-Series targets embedded edge devices that need real-time computer vision, sensor fusion, and decision-making at low power, while the Xplorer X-Series targets edge servers and intelligent gateways running multiple concurrent inference workloads. Blaize's software development kit covers model development, optimization, and deployment across the processor family and supports TensorFlow, PyTorch, ONNX, and Caffe; the graph streaming design handles diverse network topologies, including convolutional, recurrent, transformer, and hybrid models. Key applications include multi-sensor processing for autonomous vehicles, smart retail analytics, industrial robotics and inspection, surveillance and security, and medical imaging. With over $200 million in funding and partnerships with tier-one automotive manufacturers and industrial leaders, Blaize addresses edge deployments where latency, privacy, bandwidth, and energy efficiency are the binding constraints.
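A minimal sketch of the general idea behind graph-native execution, under the assumption that a network is described as a DAG of named nodes and evaluated in dependency order. It illustrates the scheduling concept only, not Blaize's GSP hardware or SDK.

import numpy as np

# A tiny network as a DAG: each node lists its dependencies and a function to
# run once those dependencies have produced their outputs.
graph = {
    "input": {"deps": [],        "fn": lambda: np.ones((1, 8))},
    "fc1":   {"deps": ["input"], "fn": lambda x: x @ np.full((8, 4), 0.5)},
    "relu":  {"deps": ["fc1"],   "fn": lambda x: np.maximum(x, 0)},
    "fc2":   {"deps": ["relu"],  "fn": lambda x: x @ np.full((4, 2), 0.25)},
}

def run(graph):
    # Evaluate nodes in dependency order; insertion order here is already
    # topological, so each node's inputs are ready when it runs.
    done = {}
    for name, node in graph.items():
        done[name] = node["fn"](*[done[d] for d in node["deps"]])
    return done

print(run(graph)["fc2"])   # output of the final node
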
Recogni
San Jose, United States
Recogni is a semiconductor company developing high-performance AI vision processors for autonomous vehicle perception. Founded in 2017 in San Jose, California, Recogni targets the computational demands of vehicles that must process multiple high-resolution camera, radar, and lidar streams simultaneously while making real-time decisions reliably. Its processor architecture delivers over 1,000 tera-operations per second (TOPS) and, unlike general-purpose AI accelerators, is purpose-built for perception tasks such as object detection, classification, tracking, semantic segmentation, depth estimation, and multi-sensor fusion. The chip combines dedicated blocks for image signal processing, vision preprocessing, neural network acceleration, and post-processing into an end-to-end pipeline from raw sensor data to perception outputs. The design incorporates redundancy and functional safety features compliant with ISO 26262, along with the temperature tolerance and cost targets required for production deployment toward Level 4 and Level 5 autonomy. Recogni's software stack supports leading AI frameworks and provides tools for optimizing perception models developed by automotive OEMs and tier-one suppliers. With over $100 million in funding from automotive-focused venture capital and strategic investors, Recogni is partnering with automakers and autonomous vehicle developers on perception computing for self-driving cars, trucks, and robotaxis.
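A quick sizing sketch of why multi-camera perception lands in the hundreds-to-thousands of TOPS range. Every parameter below (camera count, resolution, frame rate, per-pixel network cost) is an illustrative assumption, not a Recogni specification.

# Multiply per-pixel neural-network work out across sensors and frames to see
# how quickly the sustained compute requirement grows.
cameras             = 8            # assumed surround-view camera count
width, height       = 3840, 2160   # assumed 8 MP sensors
fps                 = 30
ops_per_pixel       = 100_000      # assumed cost of one backbone pass (ResNet-scale)
networks_per_camera = 3            # e.g. detection + segmentation + depth

pixels_per_sec = cameras * width * height * fps
tops_needed = pixels_per_sec * ops_per_pixel * networks_per_camera / 1e12
print(f"{tops_needed:.0f} TOPS sustained")   # about 600 TOPS under these assumptions
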
Cambricon
Beijing, China
Cambricon Technologies is China's leading dedicated AI chip designer and one of the first companies worldwide focused exclusively on processors for artificial intelligence. Founded in 2016 as a spinout from the Chinese Academy of Sciences and headquartered in Beijing, Cambricon listed on Shanghai's STAR Market in 2020. Its portfolio spans cloud training and inference, edge computing, and intelligent terminal devices. The flagship MLU (Machine Learning Unit) series targets data-center AI workloads in competition with NVIDIA GPUs and other accelerators, with parts such as the MLU370 and MLU270 scaling from single-chip solutions to multi-chip clusters for enterprise and cloud deployments. For edge and terminal applications, Cambricon offers lower-power processors and licensable IP for intelligent cameras, robots, drones, and IoT devices with constrained power budgets. The Neuware software platform supports major deep learning frameworks and provides optimized neural network libraries and tools for model compression and deployment. Cambricon's technology is used in smart cities, autonomous driving, intelligent manufacturing, healthcare, finance, and consumer electronics, serving major Chinese technology firms, cloud service providers, and government agencies. As China pursues semiconductor self-sufficiency, Cambricon plays a strategic role in building indigenous AI computing capability, with over 1,000 employees including leading AI and chip-design engineers.
Enflame Technology
Shanghai, China
Enflame Technology is a Chinese AI chip company focused on high-performance processors for AI training and inference in cloud data centers and at the edge. Founded in 2018 and based in Shanghai, Enflame has become one of the principal domestic alternatives to imported AI accelerators. Its CloudBlazer accelerators, built around the company's DTU (Deep Learning Training Unit) chips, combine high-bandwidth memory, interconnects for multi-chip scaling, and tensor units for the matrix operations at the core of neural network training and inference, delivering hundreds of teraFLOPS for workloads such as large language models, computer vision networks, and recommendation systems. The TopsRider software stack supports TensorFlow, PyTorch, and PaddlePaddle, emphasizing migration of existing models with minimal code changes and backed by optimization libraries, debugging tools, and performance profilers. Applications span autonomous driving perception and planning, natural language processing and chatbots, recommendation engines for e-commerce and content platforms, intelligent video analytics, and scientific computing. With over $500 million raised from Chinese venture capital firms and strategic investors, a team of over 500 engineers, and partnerships with major cloud providers, telecommunications companies, and internet firms, Enflame is central to China's push for self-reliance in AI computing infrastructure.
Horizon Robotics
Beijing, China
AI processors for intelligent driving
Black Sesame Technologies
Shanghai, China
Autonomous driving AI chips
Intel AI
Santa Clara, United States
Global semiconductor leader with AI accelerators and neuromorphic computing research
Google TPU
Mountain View, United States
Custom tensor processing units designed for machine learning workloads
SambaNova Systems
Palo Alto, United States
Dataflow architecture for AI with reconfigurable computing
IBM Research Neuromorphic
Yorktown Heights, United States
IBM's brain-inspired computing research with TrueNorth and successor chips
Rain AI
San Francisco, United States
Brain-inspired computing using analog memristor technology
Qualcomm AI
San Diego, United States
Mobile technology leader with on-device AI acceleration
Apple Neural Engine
Cupertino, United States
Apple's dedicated neural processing units in Apple Silicon chips
MediaTek AI
Hsinchu, Taiwan
Fabless semiconductor company with APU technology for mobile and edge AI
Biren Technology
Shanghai, China
Chinese GPU startup developing high-performance AI chips
d-Matrix
Santa Clara, United States
In-memory computing for AI inference with digital processing-in-memory
Rebellions
Seoul, South Korea
Korean AI chip startup developing neural processing units
FuriosaAI
Seoul, South Korea
Korean AI semiconductor company for data center inference
Syntiant
Irvine, United States
Ultra-low-power AI chips for always-on edge applications
Kneron
San Diego, United States
Edge AI processor company with reconfigurable NPU technology
Eta Compute
Westlake Village, United States
Ultra-low-power AI processing for battery-operated edge devices
Axelera AI
Eindhoven, Netherlands
In-memory computing for edge AI with breakthrough efficiency
Roviero
Salt Lake City, United States
Neuromorphic AI technology for edge processing and autonomy
Applied Brain Research
Waterloo, Canada
Neuromorphic AI software and brain-inspired computing
Perceive
San Jose, United States
Ultra-low-power AI inference processors for edge devices
AIStorm
San Jose, United States
AI-in-sensor technology eliminating the analog-to-digital conversion bottleneck
Quadric
Burlingame, United States
General-purpose neural processing units for edge AI
AlphaICs
Bengaluru, India
Edge AI processor company from India with Real AI Processor
Koniku
San Jose, United States
Wetware company merging synthetic biology with silicon for neuromorphic computing
Maxim Integrated AI
San Jose, United States
Ultra-low-power AI microcontrollers for edge applications
NXP AI
Eindhoven, Netherlands
Semiconductor company with edge AI solutions for automotive and industrial
Renesas AI
Tokyo, Japan
Embedded AI solutions for automotive and industrial edge applications
Microchip AI
Chandler, United States
Embedded AI solutions across MCUs, MPUs, and PolarFire FPGAs
Lattice AI
Hillsboro, United States
Low-power FPGA solutions for AI at the edge
OpenAI
San Francisco, United States
AI research company behind the GPT models, aiming to develop safe and beneficial artificial general intelligence
Anthropic
San Francisco, United States
AI safety company developing reliable and interpretable AI systems including Claude
Groq
Mountain View, United States
AI inference company developing Language Processing Units for ultra-fast AI inference
Inflection AI
Palo Alto, United States
AI company that developed the Pi personal AI assistant (most of its team joined Microsoft in 2024)
Mistral AI
Paris, France
French AI startup developing open-weight large language models
Aleph Alpha
Heidelberg, Germany
European AI company developing sovereign AI solutions for enterprises and governments