In Star Wars Rebels, there was an E-XD-series infiltrator droid that could quickly take inventory of everything in a Rebel warehouse. With the advanced object recognition capabilities of modern AI, it’s only a matter of time before an Android app can accurately and rapidly identify and catalog objects in real time from video capture. It could work like a home inventory app where users just capture video while moving around the house instead of photographing and labeling items one by one. When do you think such an app will become available? And what is the closest app available right now?
edit: I didn’t say offline or on-device, I don’t know why everyone assumes that. I mean a service offered through an Android app.
Modern AI, as you’re seeing it today, is processed by massive online data centers with thousands of processing units running in parallel, not by your local device. Your device would be far too slow for any sort of real-time object recognition, at least with the current state of technology.
TL;DR - I don’t think it’ll happen anytime soon, at least not on your local device. It would take a super fast and steady connection to the AI service.
Don’t underestimate the potential for optimization when you can constrain the problem to a narrow range of uses. Model pruning and custom silicon go far. Voice assistants used to be purely cloud compute, but a lot of common use cases are done on device now.
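To make the pruning point concrete: magnitude pruning just zeroes out the smallest weights in a model, which shrinks what has to run on device. This is a toy pure-Python sketch of the idea, not any particular framework’s implementation (real toolkits prune per-layer on tensors and usually fine-tune afterwards):

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out roughly the smallest-magnitude fraction of weights.

    weights:  flat list of floats (stand-in for a layer's parameters)
    sparsity: fraction of weights to zero, between 0 and 1
    """
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    # Find the magnitude threshold below which weights get dropped.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

# Pruning half the weights keeps only the largest-magnitude ones.
pruned = prune_by_magnitude([0.9, -0.05, 0.4, 0.01, -0.7, 0.12], 0.5)
# -> [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

Sparse weights compress well and skip multiplications, which is part of how cloud-sized models get squeezed onto phones.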
Yes, I’ve been testing FUTO Voice Recognition lately. It’s awesome as hell, but it is far from real-time. And this ain’t even object recognition, it’s only voice recognition.
https://voiceinput.futo.org/
https://play.google.com/store/apps/details?id=org.futo.voiceinput
dunno, some mobile devices are starting to ship with pretty passable gpus nowadays
We’re not talking about image rendering, we’re talking about image recognition. Although they may seem related, they are not.
It’s one thing to sling a 3D model and textures to a GPU, but it’s totally a different thing to take a photo and sling it against a humongous AI model being run at a datacenter with billions of images to compare it to.
image recognition is also done on gpus, a powerful enough gpu on say, a phone can do a variety of ai tasks
a mobile integrated intel gpu can already do facial recognition on a video stream for example
data centers have to be big because they centralize a lot of work
Recognizing a face is one thing, that’s more or less just knowing certain geometries. Recognizing who that face actually is, or what model car that is, or whatever, requires processing through a huge database of information.
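Worth noting the “huge database” is smaller than it sounds in practice: recognition pipelines typically compress each face (or object) into a short embedding vector and compare vectors, not raw images. A minimal sketch of that lookup step, with made-up names and vectors for illustration (a real system would get the embeddings from a trained model):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(query, known, threshold=0.8):
    """Return the best-matching name, or None if nothing is close enough.

    `known` maps names to embedding vectors. Comparing a query against
    thousands of short vectors is cheap enough for a phone.
    """
    best_name, best_score = None, threshold
    for name, emb in known.items():
        score = cosine(query, emb)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

known_faces = {
    "alice": [0.9, 0.1, 0.0],  # illustrative values only
    "bob":   [0.1, 0.9, 0.2],
}
match = identify([0.88, 0.12, 0.05], known_faces)
# -> "alice"
```

The expensive part, turning pixels into the embedding, is exactly what model pruning and mobile NPUs are chipping away at.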
Also, as of right now, not all AI systems are even smart enough to distinguish a human from a monkey. They both have faces yo…
tell that to my frigate nvr
No shit Watson, that’s my whole point. AI as anyone today knows it is cloud based, meaning you’re tethered to the internet. Your device can’t process it all by its little measly lonesome self.
you should look up what frigate is.
my desktop gpu can generate ai art pretty quickly too
Again, we are not discussing AI generation, we are discussing recognizing images from a camera. That requires parallel processing and many terabytes, if not petabytes, of images to compare against.
You got a petabyte of storage and a 1024 core processor to scavenge through all those images to tell you that the picture of your butt plug looks like a purple booty packer 3000?
Honestly, I expect some form of it in the next five years. Tech can move fast when it wants to and there’s 💵 involved.
Is Moore’s Law Finally Dead?