• 0 Posts
  • 41 Comments
Joined 3 years ago
Cake day: July 1st, 2023



  • This is the theme of almost all of the “toppling”. Mostly they’ve just… resigned. They probably keep all the perks, and then take up a corporate advisor position once there’s less heat.

    Headlines like this make it sound like there’s been real impact beyond generating articles about a few of the more public figures. But reading the article, it’s really just a few politicians and bureaucrats resigning. Mandelson’s firing was already months ago. The investigation into a former Norwegian PM sounds like it’s as harsh as things have got so far for politicians this time. And for private companies, nothing except one law firm board member resigning?

    They’re all getting away with it, and all the victims get is a hundred headlines about Musk being named in the files, and having their lives endangered by the terrible Don-centric redaction.




  • Perhaps something like this: https://lemmy.world/post/42528038/22000735

    Deferring responsibility for risk and compliance obligations to AI is bound to lead to some kind of tremendous problem, that’s a given. But the EU has been pretty keen lately on embedding requirements into their newer digital-realm laws about giving market regulators access to internal documents on request.

    This is not to suggest it’s anywhere close to a certainty that an Enron will happen, though. There is still the exceptionally large problem that company executives are part of the power structures which selectively decide who to prosecute, if their lobbyists haven’t already succeeded in watering the legislation down to be functionally useless before it’s even in force. So it will take a huge fuck up and public pressure for any of the countries to make good on their own rules.

    Given that almost all the media heat from the Trump-Epstein files has been directed at a few easy-target public figures, while completely ignoring the obvious systemic corruption by multiple corporate entities, I don’t have high hopes for that part. But if the impending fuck up and scale of noncompliance are big enough, there’s a chance there will be audits and EU courts involved.



  • I read both of these, and what struck me was how remarkably naive both studies felt. I found myself thinking: “there’s no way the authors have any background in the humanities”. Turns out there are two authors and, lo and behold, both have computer science degrees. That might explain why they seem somehow incredulous at the results - they’ve approached the problem as evaluating a system’s fitness in a vacuum.

    But it’s not a system in a vacuum. It’s a vacuum that has sucked up our social system, sold to bolster the social standing of the heads of a social construct.

    Had they looked at the context of how AI has been marketed, as an authoritative productivity booster, they might have had some idea why both disempowerment and reduced mastery could be occurring: The participants were told to work fast and consult the AI. What a shock that people took the responses seriously and didn’t have time to learn!

    I’d ask why Anthropic had computer scientists conducting sociological research, but I assume this part of the output has just been published to assuage criticism of their trust and safety practices. The final result will probably be adding another line of ‘if query includes medical term then print “always ask a doctor first”’ to the system prompt.
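
    Something like this utterly hypothetical sketch, in Python for the sake of argument (the term list, function name, and behaviour are all made up, not anything Anthropic has actually shipped):

    ```python
    # Hypothetical naive keyword guard, of the kind being joked about above.
    # Purely illustrative; not real system-prompt or trust-and-safety logic.
    MEDICAL_TERMS = {"dosage", "symptom", "diagnosis", "prescription"}

    def append_disclaimer(query: str, response: str) -> str:
        """Bolt a stock disclaimer onto the response if the query looks medical."""
        if any(term in query.lower() for term in MEDICAL_TERMS):
            return response + "\n\nAlways ask a doctor first."
        return response
    ```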

    This constant vacillation between “it’s a revolution and changes our entire reality!” and “you can’t trust it and you need to do your own research” from the AI companies is fucking tiresome. You can’t have it both ways.







  • Commenting “This raises important questions worth discussing. The details matter here and I think we need more transparency around how decisions like this get made.” on an illustration of a hedgehog on a bicycle pulling baby hedgehogs in an egg carton AND on a post about using bullet measurements to help Americans visualize 1cm? Slop.



  • It’s overall a good write-up. I think for me there are some pieces missing though, which I would love to see further explored, although it is not yet possible to fully do so. For example:

    • Why are Jeri Laber’s files on Yugoslavia from 1980 to 1984 restricted until 2060?
    • Why is her correspondence with the Department of State between 1984 and 1987 restricted until 2060?
    • Why are there almost no files listed at all from HRW between 1984 and 1990 for Yugoslavia?
    • Why did she go there in 1988?
    • Why are Ivana Nizich’s 1991 files on Yugoslavia and the World Bank/IMF information restricted until 2067?
    • Why are Amnesty International docs by HRW also restricted?

    If it weren’t for the now-released CIA files from the 80s and 90s, you’d think the area had simply disappeared entirely.


  • Who needs pure AI model collapse when you can have journalists give it a more human touch? I caught this snippet from the Australian ABC about the latest Epstein files drop:

    [Screenshot: ABC result in a Google search, listing the wrong Boris for the search term ‘23andme Boris nikolic’]

    The Google AI summary does indeed highlight Boris Nikolić the fashion designer if you search for only that name. But I’m assuming this journalist was using ChatGPT, because if you look at the Google summary, it very prominently lists his death in 2008. And it’s surprisingly correct! A successful scraping of Wikipedia by Gemini, amazing.

    But the Epstein email was sent in 2016.

    Does the journalist perhaps think it’s more likely the Boris Nikolić who is the biotech VC, former advisor to Bill Gates, and named in Epstein’s will as the “successor executor”? Info that’s literally all in the third Google result, even in the woeful state of modern Google. Pushed below the fold by the AI feature about the wrong guy, but not exactly buried enough for a journalist to have any excuse.