A loosely moderated place to ask open-ended questions
If your post meets the following criteria, it’s welcome here!
- Open-ended question
- Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
- Not about using or getting support for Lemmy: for that, see the list of support communities and tools for finding communities below
- Not ad nauseam inducing: please make sure it is a question that would be new to most members
- An actual topic of discussion
Looking for support?
Looking for a community?
My main gripe with Google Lens is that it replaced Google Image Search on their browser. Used to be able to drag/drop an image into Google and it’d do an instant search for all similar/identical images. Now it opens Google Lens and it just gives me a bunch of “related links” instead of a proper image search.
To get the old functionality back, I need to use a “Google image search” add-on in my Firefox browser. It opens the old Google image search page.
I mostly use it to find higher resolution versions of old, grainy images, but Google Lens took that functionality away from me.
Tineye usually produces better results
You don’t need an extension to do that. I can use image search just fine with images.google.com.
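Reverse image search engines like TinEye are generally understood to match images by perceptual hashing rather than exact pixel comparison, which is why they can find resized or re-compressed copies of the same picture. A minimal pure-Python sketch of the average-hash idea (the tiny pixel grids below stand in for downscaled grayscale images; real systems hash a 8x8 or larger thumbnail):

```python
def average_hash(pixels):
    """Average hash: one bit per pixel, set when the pixel is
    brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Number of differing bits; small distance = likely the same image."""
    return sum(x != y for x, y in zip(a, b))

# two "images": identical except one slightly brightened pixel
img_a = [[10, 200], [30, 220]]
img_b = [[12, 200], [30, 220]]
print(hamming(average_hash(img_a), average_hash(img_b)))  # → 0, a near-duplicate
```

Because the hash survives small edits, a search engine only has to index one short bit string per image and look up nearby hashes, instead of comparing raw pixels.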
ITT - Apple users who don’t know what Lens is and how useful it is 😂
Yeah. For Apple users: Lens is not the simple Android camera app (which, yes, can translate text on the fly and read QR codes) - Lens is a visual search tool that can answer queries based on pictures.
Like if you take a picture of a tree you can ask what species it is, or a picture of a person you can ask where the clothes are from etc.
Stupid hardware fixing I’ve never seen before - Lens will tell me what it is
Apple is about five years behind, then will announce it as a revelation they came up with, as usual 😂
And charge you for it
If you have the google app, on the landing page there is a camera icon. You can either use lens live that way, or upload a pic and it will tell you what plant/animal/etc it is, sometimes, or iirc it will translate texts. But it does both in real time too. I have an iPhone and use it a lot.
You can use Lens on iPhones just fine. It’s part of the google app.
If you have an iPhone, it does that natively. Take a picture of the text, select it, and hit translate.
Google Lens doesn’t just translate text, it contextually searches based on what it sees and interprets in an image. The translation stuff is already built into the Android camera app; Lens is something more
Sorry I haven’t used it in a long while
Tineye
Tineye sucks. Yandex has the best reverse image search
I use the RevEye extension, which lets you run a reverse image search against multiple engines, and Yandex always provides the best results.
I’m just a hobbyist, and not familiar with anything specific or prepackaged in an app, but there are probably examples posted in the projects section of Hugging Face (= the GitHub-plus-dev-social-ish thing for AI). I’m not sure what is really possible locally as far as machine vision + text recognition + translation. I think it would be really difficult to build an accurate model to do this within the limited system memory of a phone.

I’m not sure what (if anything) Google is offloading onto their servers to make this happen, or if they are tuning the hell out of a model to get it small enough. I mean, there is a reason why the Pixel line has a SoC called Tensor (it is designed for AI workloads), but I haven’t explored models or toolchains for mobile deployment.
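The memory worry above can be made concrete with back-of-envelope arithmetic: the RAM needed just to hold a model's weights is parameter count times bytes per parameter (a hypothetical helper for illustration; real on-device footprints also include activations and runtime overhead):

```python
def model_footprint_mb(param_count: int, bytes_per_param: int) -> float:
    """Rough RAM needed just to hold the model weights, in MiB."""
    return param_count * bytes_per_param / (1024 ** 2)

# e.g. a 100M-parameter model stored in fp16 (2 bytes per weight)
print(f"{model_footprint_mb(100_000_000, 2):.0f} MiB")  # → 191 MiB
```

This is why quantization (dropping to 1 byte or less per weight) is the standard trick for squeezing vision and translation models onto phones.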
What is it? What functionality are you looking for?
Lens is a visual search tool. Take a picture, ask it to search or answer a question based on what’s in the picture. Like take a picture of a tree and ask it what species is it
Oof. Sounds like it needs a huge back end with a ton of pre-processed data.
So, is Google Lens just a ‘reverse image search’? Never used it.
No, you can take a picture of something and it will pick out data from the image. e.g. You could photograph a menu in another language and translate it. It’s also a QR code scanner. It might do other things I don’t know about.
Edit: ok, reading this thread it does lots of other stuff 😆
Yandex has the best reverse image search, and I think it can also identify some stuff or products in the image. I don’t think it does text tho, I could be wrong.
Depends on what you use it for? I hardly use it except for QR code scans, so any code scanner works for me!