• 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: October 6th, 2023

  • lets_get_off_lemmy@reddthat.com to Memes@lemmy.ml · Jobs · +30/−6 · edited · 2 months ago

    Long, boring, hard to pay attention to. I read philosophy and theory sometimes, but those reads are few and far between for exactly those reasons. I really have to be in a special mood to sit down and read something that dense.

    Edit: I’m not the original commenter

  • I’m an AI researcher and yes, that’s basically right. There is no special “lighting mechanism” portion of the network designed before training. It’s just that, after seeing enough images with correct lighting (whether for text-to-image transformer models or GANs), the network learns what correct lighting should look like. It’s all about the distribution of the training data. A simple example is this-person-does-not-exist.com: all of the training images are high-resolution, close-up, well-lit headshots. If all the training data instead had unrealistic lighting, you would get unrealistic lighting out. If it’s something like 50/50, you’ll get every part of the spectrum between good lighting and bad lighting at the output (see the toy sketch at the end of this comment).

    That’s not to say that the overall training scheme, especially for something like GPT-4, doesn’t include secondary training operations for more complex tasks. But lighting in images is a simple thing to get correct with enough training examples.

    As an aside, I called that website a simple example, but I remember when it came out less than 6 years ago and it was revolutionary, so it’s crazy how fast the space has moved forward in such a short time.

    Edit: to answer the multiple-subjects question: the model has probably seen fewer images with multiple subjects and doesn’t have enough “knowledge” from its training data to accurately apply lighting in those scenarios. And you can imagine lighting is more complex in a scene with more subjects, so it’s harder for the model to fit a general solution it has seen many times to the more complex problem.
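
    To make the distribution point concrete, here’s a toy sketch in Python. Big caveat: this is not how an image model actually works; the 1-D “lighting quality” score, the numbers, and the resampling “generator” are all made up for illustration of one idea, that a model which has learned its training distribution can only emit what that distribution contains.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical 1-D "lighting quality" scores for training images:
        # well-lit images cluster near 1.0, badly lit images near 0.0.
        good = rng.normal(loc=0.9, scale=0.05, size=5000)  # well-lit training set
        bad = rng.normal(loc=0.1, scale=0.05, size=5000)   # badly lit training set

        def sample_generator(train_scores, n):
            # Stand-in for a trained generative model: with enough capacity,
            # its output distribution approaches the training distribution.
            # Sketched here by resampling the training data plus a little noise.
            picks = rng.choice(train_scores, size=n)
            return picks + rng.normal(scale=0.02, size=n)

        # Case 1: all training images well lit -> all outputs are well lit.
        print(sample_generator(good, 1000).mean())  # ~0.9

        # Case 2: a 50/50 mix -> outputs cover the whole spectrum,
        # with mass at both ends of the lighting axis.
        mixed = np.concatenate([good, bad])
        counts, _ = np.histogram(sample_generator(mixed, 1000),
                                 bins=5, range=(0.0, 1.0))
        print(counts)  # roughly half the samples near 0, half near 1

    The same picture covers the multiple-subjects point: if scenes with several subjects are rare in the training set, that region of the distribution is thinly covered, and samples drawn from it are less reliable.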

  • Hahaha, as someone who works in AI research: good luck to them. The first is a very hard problem that won’t be solved with just prompt engineering and an OpenAI account (and why not just use the 3D blueprints for weapons that already exist?), and the second is certifiably stupid. There are plenty of ways to make bombs already that don’t involve training a model that’s an expert in chemistry. A bunch of amateur 8chan half-brains probably couldn’t follow a Medium article, let alone do groundbreaking research.

    But like you said, if they want to test the viability of those bombs, I say go for it! Make them in the garage!

  • I agree with him. You have to take measures to protect the populace, as with traffic laws. If people can’t abide by those rules and the science is sound (which it is in this case, just as it is for traffic laws), then measures have to be taken to protect the community from those who refuse the verifiably safer option without due cause.

    What those measures are can be deliberated amongst the community. Could be fines, could be jail time. I don’t know what would compel someone to get a vaccine, but that could be determined over time.

  • I responded to your other comment, but I like this question too. I haven’t been addicted to a substance, but for other things I can firmly say the answer is “No.” I’m not blacked out; I’m completely present when I’m making the choice, but sometimes there’s a constant justification of “okay, I’ll do it this one last time, and tomorrow is when I’ll resist it.” And you keep doing that. And that voice gets weaker over time, to the point where you just start accepting that this is what you do now. And that often comes with self-loathing and frustration.