Coral M.2 Accelerator B+M key
Performs high-speed ML inferencing: The on-board Edge TPU coprocessor is capable of performing 4 trillion operations (tera-operations) per second (TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 400 FPS in a power-efficient manner.

Works with Debian Linux: Integrates with any Debian-based Linux system with a compatible card module slot.

Supports TensorFlow Lite: No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU (see the code sketch below the specs).

Supports AutoML Vision Edge: Easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge.

Tech specs
ML accelerator: Google Edge TPU coprocessor
Connector: M.2 (B+M key)
Dimensions: 22 mm x 80 mm (M.2-2280-B-M-S3)
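To illustrate the "compile and run" workflow, here is a minimal sketch that loads a model already compiled for the Edge TPU and runs a single inference with the PyCoral library. It assumes the Edge TPU runtime and the pycoral Python package are installed; the model filename is a placeholder.

```python
# Minimal sketch: run one inference on an Edge TPU-compiled TensorFlow Lite
# model with the PyCoral library. Assumes the Edge TPU runtime and pycoral
# are installed; the model path below is a placeholder.
import numpy as np
from pycoral.adapters import common
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter('mobilenet_v2_edgetpu.tflite')  # placeholder path
interpreter.allocate_tensors()

# Feed a dummy image sized to the model's input tensor and run it once.
width, height = common.input_size(interpreter)
dummy = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)
common.set_input(interpreter, dummy)
interpreter.invoke()

print(common.output_tensor(interpreter, 0).shape)
```

The compilation step itself happens ahead of time on a desktop with the Edge TPU Compiler (for example, `edgetpu_compiler mobilenet_v2.tflite`), which produces the `*_edgetpu.tflite` file loaded above.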
₹4,899.00*
Coral M.2 Accelerator A+E key
Performs high-speed ML inferencing: The on-board Edge TPU coprocessor is capable of performing 4 trillion operations (tera-operations) per second (TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 400 FPS in a power-efficient manner.

Works with Debian Linux: Integrates with any Debian-based Linux system with a compatible card module slot.

Supports TensorFlow Lite: No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU.

Supports AutoML Vision Edge: Easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge.

Tech specs
ML accelerator: Google Edge TPU coprocessor
Connector: M.2 (A+E key)
Dimensions: 22 mm x 30 mm (M.2-2230-A-E-S3)
₹4,899.00*
Coral Mini PCIe Accelerator
Performs high-speed ML inferencing: The on-board Edge TPU coprocessor is capable of performing 4 trillion operations (tera-operations) per second (TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 400 FPS in a power-efficient manner.

Works with Debian Linux: Integrates with any Debian-based Linux system with a compatible card module slot.

Supports TensorFlow Lite: No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU.

Supports AutoML Vision Edge: Easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge.

Tech specs
ML accelerator: Google Edge TPU coprocessor
Connector: Half-size Mini PCIe
Dimensions: 30 mm x 26.8 mm
₹4,899.00*
Coral System-on-Module (SoM)
Provides a complete system: The Coral SoM is a fully integrated Linux system that includes NXP's i.MX 8M system-on-chip (SoC), eMMC memory, LPDDR4 RAM, Wi-Fi, Bluetooth, and the Edge TPU coprocessor for ML acceleration. It runs Mendel, a derivative of Debian Linux.

Performs high-speed ML inferencing: The on-board Edge TPU coprocessor is capable of performing 4 trillion operations (tera-operations) per second (TOPS), using 0.5 watts for each TOPS (2 TOPS per watt). For example, it can execute state-of-the-art mobile vision models such as MobileNet v2 at 400 FPS in a power-efficient manner.

Integrates with your custom hardware: The SoM connects to your own baseboard hardware with three 100-pin connectors.

Supports TensorFlow Lite: No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Edge TPU.

Supports AutoML Vision Edge: Easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge (see the classification sketch below the specs).

Also available with a baseboard as part of the Coral Dev Board.

Tech specs
CPU: NXP i.MX 8M SoC (quad Cortex-A53, Cortex-M4F)
GPU: Integrated GC7000 Lite Graphics
ML accelerator: Google Edge TPU coprocessor
RAM: 1 GB LPDDR4 (option for 2 GB or 4 GB coming soon)
Flash memory: 8 GB eMMC
Wireless: Wi-Fi 2x2 MIMO (802.11b/g/n/ac 2.4/5 GHz) and Bluetooth 4.2
Dimensions: 48 mm x 40 mm x 5 mm
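To make the AutoML Vision Edge point concrete, here is a minimal classification sketch using the PyCoral adapters, assuming an Edge TPU-compiled classification model (for example one exported from AutoML Vision Edge) and its label file have been copied onto the module; all file names are placeholders.

```python
# Minimal sketch: classify one image on the SoM's Edge TPU using an
# Edge TPU-compiled classification model. Model, label, and image paths
# are placeholders.
from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter('classifier_edgetpu.tflite')  # placeholder path
interpreter.allocate_tensors()

labels = read_label_file('labels.txt')                       # placeholder path
image = Image.open('photo.jpg').convert('RGB').resize(common.input_size(interpreter))
common.set_input(interpreter, image)
interpreter.invoke()

# Print the top-3 classes with their scores.
for c in classify.get_classes(interpreter, top_k=3):
    print(labels.get(c.id, c.id), round(c.score, 3))
```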
₹16,099.00*
Google AIY Vision Kit 1.1
Introduction
The AIY Vision Kit from Google lets you build your own intelligent camera that can see and recognize objects using machine learning. All of this fits in a handy little cardboard cube, powered by a Raspberry Pi. Everything you need is provided in the kit, including the Raspberry Pi.

Meet your kit
Welcome! Let's get started. The following instructions show you how to assemble your AIY Vision Kit, connect to it, and run the Joy Detector demo, which recognizes faces and detects if they're smiling (a minimal code sketch of this demo appears after the list of materials below). Then you can try running some other demos that detect other kinds of objects with the camera. You can even install your own custom-trained TensorFlow model. Time required to build: 1.5 hours.

Check your kit version
These instructions are for Vision Kit 1.1. Check your kit version by looking on the back of the white box sleeve in the bottom-left corner. If it says version 1.1, proceed ahead! If it doesn't have a version number, follow the assembly instructions for the earlier version.

GATHER ADDITIONAL ITEMS
You'll need some additional things, not included with your kit, to build it:
- Micro USB power supply: The best option is a USB power supply that can provide 2.1 A of power via a micro-USB B connector. The second-best choice is a phone charger that also provides 2.1 A (sometimes called a fast charger). Don't try to power your Raspberry Pi from your computer; it will not be able to provide enough power and it may corrupt the SD card, causing boot failures or other errors.

Below are two different options to connect to your kit. Choose the one that works best for you, based on what you have available.

OPTION 1: USE THE AIY PROJECTS APP
Choose this option if you have access to an Android smartphone and a separate computer. You'll need:
- Android smartphone
- Windows, Mac, or Linux computer
- Wi-Fi connection
- Optional: Monitor or TV (any size will work) with an HDMI input
- Optional: Normal-sized HDMI cable and mini HDMI adapter

OPTION 2: USE A MONITOR, MOUSE, AND KEYBOARD
Choose this option if you don't have access to an Android smartphone. You'll need:
- Windows, Mac, or Linux computer
- Mouse
- Keyboard
- Monitor or TV (any size will work) with an HDMI input
- Normal-sized HDMI cable and mini HDMI adapter
- Adapter to attach your mouse and keyboard to the kit. Below are two different options.
  - Adapter option A: USB On-the-Go (OTG) adapter cable to convert the Raspberry Pi micro USB port to a normal-sized USB port. You can then use a keyboard/mouse combo that requires only one USB port.
  - Adapter option B: Micro USB hub that provides multiple USB ports to connect any traditional keyboard and mouse.

Get to know the hardware
Open your kit and get to know what's inside. Take note that the Electrical Hardware bag is underneath the Mechanical Hardware bag.

LIST OF MATERIALS (IN YOUR KIT)
1. Vision Bonnet x1
2. Raspberry Pi Zero WH x1
3. Raspberry Pi Camera v2 x1
4. Long flex cable x1
5. Push button x1
6. Button harness x1
7. Micro USB cable x1
8. Piezo buzzer x1
9. Privacy LED x1
10. Short flex cable x1
11. Button nut x1
12. Tripod nut x1
13. LED bezel x1
14. Standoffs x2
15. Micro SD card x1
16. Camera box cardboard x1
17. Internal frame cardboard x1
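For a sense of what the Joy Detector does in code, the sketch below is a stripped-down loop based on the AIY Vision Python API, assuming the kit is assembled and running the AIY system image (which provides aiy.vision and picamera); it runs face detection on the Vision Bonnet and prints a joy score per face. Camera settings and the frame count are illustrative.

```python
# Minimal sketch of a Joy Detector-style loop on the Vision Kit, assuming
# the AIY system image is installed (aiy.vision and picamera available).
from picamera import PiCamera
from aiy.vision.inference import CameraInference
from aiy.vision.models import face_detection

with PiCamera(sensor_mode=4, resolution=(1640, 1232), framerate=30) as camera:
    with CameraInference(face_detection.model()) as inference:
        # Run on-device face detection on the Vision Bonnet for a few frames
        # and report how joyful each detected face looks.
        for result in inference.run(30):
            for face in face_detection.get_faces(result):
                print('Joy score: %.2f' % face.joy_score)
```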
₹12,999.00*
Google AIY Voice Kit 1.0
Need directions to your nearest dry cleaner? Or maybe you need to send a hands-free email? Perhaps you just want to know what the weather's like in Timbuktu. Ask your new little friend: the Google AIY Voice Bot! This no-soldering-required kit is an awesome addition to your Raspberry Pi 3 computer and turns it into a smart home assistant.

Google AIY Projects brings do-it-yourself artificial intelligence to your maker projects. With this AIY Voice Kit from Google, you can build a standalone voice recognition system using the Google Assistant, or add voice recognition and natural language processing to your Raspberry Pi-based projects. The kit includes all of the components needed to assemble the basic kit, which works with the Google Assistant SDK as well as on-device voice recognition with TensorFlow.

Kit includes:
- Voice HAT accessory board (fully assembled)
- Voice HAT microphone board (fully assembled)
- Plastic standoffs
- 3" speaker (wires attached)
- Arcade-style push button
- 4-wire button cable
- 5-wire daughter board cable
- External cardboard box
- Internal cardboard frame
- Lamp
- Micro-switch
- Lamp holder
₹2,749.00*
Google AIY Voice Kit 2.0
Introduction
The AIY Voice Kit from Google lets you build your own natural language processor and connect it to the Google Assistant or Cloud Speech-to-Text service, allowing you to ask questions and issue voice commands to your programs. All of this fits in a handy little cardboard cube, powered by a Raspberry Pi. Everything you need is provided in the kit, including the Raspberry Pi.

Meet your kit
Welcome! Let's get started. The following instructions show you how to assemble your AIY Voice Kit, connect to it, and run the Google Assistant demo, which turns your kit into a voice assistant that responds to your questions and commands. Then you can try some other sample code or use the Google Cloud Speech-to-Text service, which converts spoken commands into text you can use to trigger actions in your code (a minimal code sketch appears after the list of materials below). Time required to build: 1.5 hours.

Check your kit version
These instructions are for Voice Kit 2.0. Check your kit version by looking on the back of the white box sleeve in the bottom-left corner. If it says version 2.0, proceed ahead! If it doesn't have a version number, follow the assembly instructions for the earlier version.

GATHER ADDITIONAL ITEMS
You'll need some additional things, not included with your kit, to build it:
- 2 mm flat screwdriver: For tightening the screw terminals.
- Micro USB power supply: The best option is a USB power supply that can provide 2.1 A of power via a micro-USB B connector. The second-best choice is a phone charger that also provides 2.1 A (sometimes called a fast charger). Don't try to power your Raspberry Pi from your computer; it will not be able to provide enough power and it may corrupt the SD card, causing boot failures or other errors.
- Wi-Fi connection

Below are two different options to connect your kit to Wi-Fi, so that you can communicate with it wirelessly.

OPTION 1: USE THE AIY PROJECTS APP
Choose this option if you have access to an Android smartphone and a separate computer. You'll need:
- Android smartphone
- Windows, Mac, or Linux computer

OPTION 2: USE A MONITOR, MOUSE, AND KEYBOARD
Choose this option if you don't have access to an Android smartphone. You'll need:
- Windows, Mac, or Linux computer
- Mouse
- Keyboard
- Monitor or TV (any size will work) with an HDMI input
- Normal-sized HDMI cable and mini HDMI adapter
- Adapter to attach your mouse and keyboard to the kit. Below are two different options.
  - Adapter option A: USB On-the-Go (OTG) adapter cable to convert the Raspberry Pi micro USB port to a normal-sized USB port. You can then use a keyboard/mouse combo that requires only one USB port.
  - Adapter option B: Micro USB hub that provides multiple USB ports to connect any traditional keyboard and mouse.

Get to know the hardware
Open your kit and get familiar with what's inside.

LIST OF MATERIALS (IN YOUR KIT)
1. Voice Bonnet x1
2. Raspberry Pi Zero WH x1
3. Speaker x1
4. Micro SD card x1
5. Push button x1
6. Button nut x1
7. Button harness x1
8. Standoffs x2
9. Micro USB cable x1
10. Speaker box cardboard x1
11. Internal frame cardboard x1
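As a taste of the Cloud Speech-to-Text flow, the sketch below listens for a few simple phrases and toggles the button LED. It assumes the kit is assembled, the AIY system image is installed (providing aiy.board and aiy.cloudspeech), and Google Cloud credentials for the Speech-to-Text API are configured; the phrases and LED action are illustrative only.

```python
# Minimal sketch: voice commands with the Cloud Speech-to-Text service on
# the Voice Kit, assuming the AIY system image and Cloud credentials are
# set up. The phrases and LED action are illustrative only.
from aiy.board import Board, Led
from aiy.cloudspeech import CloudSpeechClient

client = CloudSpeechClient()
with Board() as board:
    while True:
        text = client.recognize()        # blocks until speech is transcribed
        if text is None:
            continue
        text = text.lower()
        if 'turn on the light' in text:
            board.led.state = Led.ON
        elif 'turn off the light' in text:
            board.led.state = Led.OFF
        elif 'goodbye' in text:
            break
```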
₹5,249.00*
Google Coral Development Board
Description
A development board to quickly prototype on-device ML products. Scale from prototype to production with a removable system-on-module (SoM).

Supports TensorFlow Lite: No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the Coral Dev Board (see the detection sketch below the specs).

Scale from prototype to production: Designed with your manufacturing needs in mind. The SoM can be removed from the baseboard, ordered in bulk, and integrated into your own hardware.

Includes a full system: SoC + ML accelerator + connectivity, all on the board, running Mendel, a derivative of Debian Linux, so you can run your favourite Linux tools with this board.

Supports AutoML Vision Edge: Easily build and deploy fast, high-accuracy custom image classification models to your device with AutoML Vision Edge.

Tech specs

Edge TPU Module
CPU: NXP i.MX 8M SoC (quad Cortex-A53, Cortex-M4F)
GPU: Integrated GC7000 Lite Graphics
ML accelerator: Google Edge TPU coprocessor
RAM: 1 GB LPDDR4
Flash memory: 8 GB eMMC
Wireless: Wi-Fi 2x2 MIMO (802.11b/g/n/ac 2.4/5 GHz) and Bluetooth 4.1
Dimensions: 48 mm x 40 mm x 5 mm

Baseboard
Flash memory: MicroSD slot
USB: Type-C OTG; Type-C power; Type-A 3.0 host; Micro-B serial console
LAN: Gigabit Ethernet port
Audio: 3.5 mm audio jack (CTIA compliant); digital PDM microphone (x2); 2.54 mm 4-pin terminal for stereo speakers
Video: HDMI 2.0a (full size); 39-pin FFC connector for MIPI-DSI display (4-lane); 24-pin FFC connector for MIPI-CSI2 camera (4-lane)
GPIO: 3.3 V power rail; 40 - 255 ohms programmable impedance; ~82 mA max current
Power: 5 V DC (USB Type-C)
Dimensions: 88 mm x 60 mm x 24 mm
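As one example of the kind of on-device ML prototyping the board is aimed at, here is a minimal object-detection sketch using the PyCoral adapters on the board's Edge TPU. It assumes an Edge TPU-compiled detection model and a still image are already on the board; all file names are placeholders.

```python
# Minimal sketch: detect objects in one image on the Dev Board's Edge TPU.
# Assumes an Edge TPU-compiled detection model; paths are placeholders.
from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter('detector_edgetpu.tflite')  # placeholder path
interpreter.allocate_tensors()

# Resize the image to the model's input size, keeping track of the scale
# so bounding boxes can be mapped back to the original image.
image = Image.open('frame.jpg')                            # placeholder path
_, scale = common.set_resized_input(
    interpreter, image.size, lambda size: image.resize(size, Image.LANCZOS))
interpreter.invoke()

for obj in detect.get_objects(interpreter, score_threshold=0.4, image_scale=scale):
    print(obj.id, round(obj.score, 2), obj.bbox)
```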
₹20,999.00*
Google Coral USB Accelerator
Description
A USB accessory that brings machine learning inferencing to existing systems. Works with Raspberry Pi and other Linux systems.

Local inferencing: Run on-device ML inferencing on the Edge TPU designed by Google.

Works with Debian Linux: Connect to any Linux-based system with the included USB Type-C cable (see the sketch below the specs for a quick way to confirm the device is detected).

Supports TensorFlow Lite: No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the USB Accelerator.

Tech specs
ML accelerator: Google Edge TPU coprocessor
Connector: USB Type-C* (data/power)
Dimensions: 65 mm x 30 mm
Supported operating systems: Debian Linux
Supported frameworks: TensorFlow Lite

* Compatible with Raspberry Pi boards at USB 2.0 speeds only.
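Since the accelerator shows up to the host as a USB-attached Edge TPU, a quick way to confirm the system sees it is to enumerate Edge TPU devices with PyCoral, as in the sketch below; it assumes the Edge TPU runtime and the pycoral package are installed on the host.

```python
# Minimal sketch: confirm the host sees the USB Accelerator by listing all
# Edge TPU devices known to the runtime. Assumes the Edge TPU runtime and
# the pycoral package are installed.
from pycoral.utils.edgetpu import list_edge_tpus

for tpu in list_edge_tpus():
    # Each entry describes one device, including its type (e.g. USB or PCIe)
    # and path.
    print(tpu)
```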
₹9,499.00*