Abspeckgeflüster – a forum for people with weight(ing)

Free. Ad-free. Human. Your weight-loss forum.

@joepie91@fedi.slightly.tech
About
Posts: 3 · Topics: 0 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0


Posts


  • Firefox uses on-device downloaded-on-demand ML models for privacy-preserving translation.
    joepie91@fedi.slightly.tech

    @firefoxwebdevs "Without the user's request" is quite ambiguous, though. I'm reminded here of Google, which put the AI tab before the Web/All tab, displacing it so that people would unintentionally hit the AI button and thereby "request" it. It's a small, plausibly-deniable change that nevertheless violates the user's boundaries, and one that is difficult to call out and stop, even internally within a company or team. I've seen many other companies and products do the same thing.

    A genuine opt-in would, in my opinion, look something like a single "hey, do you want such-and-such features? these are the implications" question, presented in a non-misleading way; if that question is not answered affirmatively, the various UI elements for "AI" features should not appear in the UI at all unless the user goes and changes this setting. It's much harder for that to get modified in questionable ways down the line, and it reduces the opportunities for a misclick to a single one, instead of "every time someone wants to click a button". It also means users aren't constantly pestered with whatever that week's new "AI" thing is if they've shown no interest.

    Such a dialog could still specify something like "if you choose Yes, Firefox will still only download models once you try to use a feature", to make it clear to users that it's not an all-or-nothing, and they can still pick-and-choose after selecting 'Yes'.

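The single-question opt-in described in this post could be sketched roughly like so. This is a hypothetical TypeScript sketch; the pref names and functions are invented for illustration and are not Firefox's actual code.

```typescript
// Hypothetical sketch of a genuine opt-in: one up-front question,
// and no "AI" UI at all until it has been answered affirmatively.
type AiConsent = "undecided" | "granted" | "declined";

interface Prefs {
  aiConsent: AiConsent; // set once, by the single opt-in dialog
}

// "AI" UI elements only exist in the interface after an explicit "Yes".
function shouldShowAiUi(prefs: Prefs): boolean {
  return prefs.aiConsent === "granted";
}

// Even after consent, models are only fetched lazily, when the user
// actually tries to use a specific feature (as the dialog promised).
function mayDownloadModel(prefs: Prefs, featureRequested: boolean): boolean {
  return shouldShowAiUi(prefs) && featureRequested;
}
```

Collapsing consent into one pref means later UI changes can't quietly multiply the misclick surface: every "AI" entry point checks the same single answer.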

  • Firefox uses on-device downloaded-on-demand ML models for privacy-preserving translation.
    joepie91@fedi.slightly.tech

    @firefoxwebdevs I can only speak for myself, of course, but I'm someone who is strongly opposed to sneaky approaches, such as hiding things in submenus or requiring people to go back later to disable new things. I'm also strongly opposed to basically everything in the current generation of "AI" (LLMs, GenAI, etc.) - but personally I wouldn't consider this sneaky, as it's immediately visible that there's a second choice to make, at the exact moment you disable "AI".

    Of course if that stops being the case and the second option gets hidden behind an "Advanced..." button or foldout for example, it would be sneaky. But in the way it's shown in my mockup, I would consider it fine as it's both proactively presented and immediately actionable.

    (I do still think that exploitative "AI" things should be opt-in rather than opt-out, but it doesn't seem like that's within the scope of options Mozilla will consider, so I'm reasoning under the assumption of an opt-out mechanism here.)


  • Firefox uses on-device downloaded-on-demand ML models for privacy-preserving translation.
    joepie91@fedi.slightly.tech

    @firefoxwebdevs My closest answer would be "no", but I think the question is somewhat mis-phrased here, and that's likely to lead to a confusing and potentially misleading outcome.

    The problem that people have is not with "AI" as a generalized category, but with the current generation of thieving, climate-destroying, grifting systems that are marketed as AI to an overwhelming degree - notably LLMs and "generative AI", but really anything with those inconsiderate properties.

    If your kill switch is presented as an "AI kill switch", then depending on the person they're either going to understand that as "exploitative tech", or as "machine learning", and so make different assumptions as to whether local translation is included in that.

    So I think you'll have to be a lot more explicit about what you mean; either by describing clearly what the kill switch includes, or what it excludes, right where the option is offered. Otherwise it's damned if you do, damned if you don't: depending on whether you include translations, one group or the other is going to be upset by the unexpected behaviour.

    So: if the translation feature is built on ethically collected data, and it has no outsized climate impact, then I would not consider it something that needs to be included in a "get rid of all of it" kill switch. But to convey this clearly to users, both the fact that it isn't included and the reason why should be explained right there next to the button, potentially with a second-step option to disable it anyway if someone still feels uncomfortable with it.

    That way you've communicated transparently to users, and shown that you have nothing up your sleeve, by immediately and proactively offering an option to disable that too, to anyone who has already shown interest in removing "AI" features.

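The include/exclude semantics argued for in this post could look something like the following. Again, this is a hypothetical TypeScript sketch: the feature names, lists, and second-step toggle are invented for illustration, not Mozilla's actual design.

```typescript
// Hypothetical sketch of a kill switch that states what it covers.
// Exploitative "AI" features are disabled by the switch itself; local
// translation is disclosed as excluded, with a second-step opt-out
// offered proactively right next to the button.
const KILL_SWITCH_INCLUDES = ["chatbot-sidebar", "genai-summaries"];
const KILL_SWITCH_EXCLUDES = ["local-translation"];

interface KillSwitchState {
  engaged: boolean;
  alsoDisableTranslation: boolean; // the proactively offered second step
}

function isFeatureEnabled(feature: string, state: KillSwitchState): boolean {
  if (!state.engaged) return true;
  // Named inclusions go away unconditionally when the switch is engaged.
  if (KILL_SWITCH_INCLUDES.includes(feature)) return false;
  // Disclosed exclusions survive unless the user took the second step.
  if (KILL_SWITCH_EXCLUDES.includes(feature)) return !state.alsoDisableTranslation;
  return true; // features outside either list are untouched
}
```

Making both lists explicit in code mirrors the point about UI copy: the switch's scope is a stated fact rather than an implicit judgment call that different users will guess differently.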