I think he means “I will give you a child, choose any of them. I don’t know where to put them any longer and they don’t even seem to like me.”
and Trump would just… “your beer? Haven’t seen it. There’s just MY two glasses of beer here. A great beer, the greatest. My uncle invented beer, Fred Budweiser Trump. Great IQ, very good genes!”
like sending Vance to literally follow her around?
laugh all you want, but YOU are next, former Twitter users who refused to pay for their blue badge and had the gall to move to Mastodon or others!
I think they don’t matter with outrage, because outrage explodes in ways that are hard to predict. I mean, I can see the problem with the ad now that it has been pointed out to me. After reading about it repeatedly, I now find it bad and ridiculous and what were they thinking? But at a first look, as a test audience I would have probably rated it as “meh, ok”.
It is about fragility, like others said, but it is also about uniqueness, in the sense of “oh, so you think you’re soo special!”
to be fair, he did turn orange
I only have a limited and basic understanding of Machine Learning, but doesn’t training models basically work like: “you, machine, spit out several versions of stuff and I, programmer, give you a way of evaluating how ‘good’ they are, so over time you ‘learn’ to generate better stuff”? Theoretically giving a newer model the output of a previous one should improve on the result, if the new model has a way of evaluating “improved”.
If I feed an ML model pictures of eldritch beings and tell it “this is what a human face looks like,” I don’t think it’s surprising that quality deteriorates. What am I missing?
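A toy sketch of what might be missing (my own illustration, not a real training procedure): each new generation of a model only learns an approximation of what the previous generation produced, so the rare detail and extremes in the original data get smoothed away and never come back, even if every single step looks like a faithful copy.

```python
# Toy illustration of "model collapse" (an analogy, not actual ML training):
# each "generation" can only reproduce a smoothed summary of the previous
# generation's output, so the extremes in the original data vanish over time.

def next_generation(outputs):
    # Stand-in for "train a new model on the old model's outputs":
    # the new generation keeps only pairwise averages, losing fine detail.
    return [(a + b) / 2 for a, b in zip(outputs, outputs[1:])]

real_data = [0.0, 10.0, 2.0, 7.0, 1.0, 9.0, 4.0]   # varied "real" examples
original_spread = max(real_data) - min(real_data)   # 10.0

outputs = real_data
for _ in range(4):                                   # four model generations
    outputs = next_generation(outputs)

collapsed_spread = max(outputs) - min(outputs)       # 0.5 after 4 generations
print(original_spread, collapsed_spread)
```

The catch is that the “improvement” score in the loop can only be measured against what the previous generation produced, so once the tails have been averaged away, no later generation has any signal to recover them from; the errors compound instead of cancelling.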
Just wanted to point out that the Pinterest examples are conflating two distinct issues: low-quality results polluting our searches (in that they are visibly AI-generated) and images that are not “true” but very convincing.
The first one (search results quality) should theoretically be Google’s main job, except that they’ve never been great at it with images. Better quality results should get closer to the top as the algorithm and some manual editing do their job; crappy images (including bad AI ones) should move towards the bottom.
The latter issue (“reality” of the result) is the one I find more concerning. As AI-generated results get better and harder to tell from reality, how would we know that any given search result isn’t a convincing spoof just coughed up by an AI? But I’m not sure this is a search-engine or even an Internet-specific issue. The internet is clearly more efficient at spreading information quickly, but any video seen on TV or image quoted in a scientific article has to be viewed much more skeptically now.
…especially you, Elon