Recent Events for foo.be MainPageDiary (Blog)

FeedCollection

Justin Mason

2026-02-26

  • 10:28 UTC Google API Keys Weren’t Secrets. But then Gemini Changed the Rules Crikey, this is a massive security fail by Google: Google spent over a decade telling developers that Google API keys (like those used in Maps, Firebase, etc.) are not secrets. But that's no longer true: Gemini accepts the same keys to access your private data. We scanned millions of websites and found nearly 3,000 Google API keys, originally deployed for public services like Google Maps, that now also authenticate to Gemini even though they were never intended for it. With a valid key, an attacker can access uploaded files, cached data, and charge LLM-usage to your account. Even Google themselves had old public API keys, which they thought were non-sensitive, that we could use to access Google’s internal Gemini. (via Rob Synnott) Tags: infosec api-keys authentication authorization google gemini google-maps fail
  • 09:47 UTC 302 HTTP redirects Considered Harmful The state of anti-phishing infrastructure nowadays is shocking. This trivial action, combined with a relatively fresh domain, results in immediate blocklisting by Google: Digging through Google forums, I found the most reported culprit: 302 temporary redirects. I used one redirect (engramma.dev → app.engramma.dev) to avoid building a landing page. In addition to a newly registered domain, this looks like an obvious issue. Security systems flag such redirects because malicious actors use them extensively. It doesn't matter that "malicious actors use them extensively" if non-malicious actors do too. That's the definition of a false positive! Then the next shitfest is from no less than 10 separate vendors copying the listing from Google and not including an automated system to pick up the list removal afterwards. I've had experience of this part -- and now that I think of it, it may have been from use of 302 redirects in my case too. (via Paul Watson) Tags: http security infosec blocklists google phishing redirects 302 false-positives fail via:paulwatson
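The scan described in the Gemini story is easy to reproduce in miniature: Google API keys have a distinctive `AIza` prefix, so finding keys leaked into public pages is a single regex pass. A minimal sketch -- the key below is synthetic, and whether a found key also authenticates to Gemini would of course need a separate check:

```python
import re

# Google API keys share a well-known shape: "AIza" followed by 35
# URL-safe characters. That makes keys embedded in public HTML/JS easy
# to find -- which is exactly why "they're not secrets" was risky advice.
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_google_api_keys(page_source: str) -> list[str]:
    """Return all Google-style API keys embedded in a page."""
    return GOOGLE_API_KEY_RE.findall(page_source)

# Synthetic example: a Maps key embedded in a script tag (not a real key).
fake_key = "AIzaSyA" + "x" * 32
html = f'<script src="https://maps.googleapis.com/maps/api/js?key={fake_key}"></script>'
print(find_google_api_keys(html))  # ['AIzaSyAxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx']
```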
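The 302-vs-301 distinction at the centre of the blocklisting story can be seen with a throwaway local server; a minimal stdlib sketch, with `app.example.com` standing in for the real redirect target:

```python
import http.client
import http.server
import threading

class RedirectHandler(http.server.BaseHTTPRequestHandler):
    """Answer every request with a redirect to a (hypothetical) app subdomain."""
    # 302 = "temporary", the pattern the post blames for the blocklisting;
    # 301 = "permanent", the conventional choice for an apex -> app hop.
    status = 302

    def do_GET(self):
        self.send_response(self.status)
        self.send_header("Location", "https://app.example.com/")
        self.end_headers()

    def log_message(self, *args):
        pass  # keep test output quiet

def redirect_status(port: int) -> tuple[int, str]:
    # http.client does not follow redirects, so we see the raw response.
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/")
    resp = conn.getresponse()
    return resp.status, resp.getheader("Location")

server = http.server.HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(redirect_status(server.server_address[1]))  # (302, 'https://app.example.com/')
server.shutdown()
```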

2026-02-24

  • 13:17 UTC Persona identity verification is a GDPR nightmare LinkedIn are using a Peter Thiel-linked company called Persona as an identity-verification service. (Discord also tried them out for age verification, but are now apparently ditching them.) This is all a bit of a nightmare for EU based users, however: "When you click “verify” on LinkedIn, you’re not giving your passport to LinkedIn. You get redirected to a company called Persona. Full name: Persona Identities, Inc. Based in San Francisco, California." For a three-minute identity check, this is what Persona collected:
    - My full name — first, middle, last
    - My passport photo — the full document, both sides, all data on the face of it
    - My selfie — a photo of my face taken in real-time
    - My facial geometry — biometric data extracted from both images, used to match the selfie to the passport
    - My NFC chip data — the digital info stored on the chip inside my passport
    - My national ID number
    - My nationality, sex, birthdate, age
    - My email, phone number, postal address
    - My IP address, device type, MAC address, browser, OS version, language
    - My geolocation — inferred from my IP
    And then there’s the weird stuff:
    - Hesitation detection — they tracked whether I paused during the process
    - Copy and paste detection — they tracked whether I was pasting information instead of typing it
    Behavioral biometrics. On top of the physical biometrics. For a LinkedIn badge. Persona didn’t just use what I gave them. They went and cross-referenced me against what they call their “global network of trusted third-party data sources”:
    - Government databases
    - National ID registries
    - Consumer credit agencies
    - Utility companies
    - Mobile network providers
    - Postal address databases
    They use uploaded images of identity documents — that’s my passport — to train their AI. They’re teaching their system to recognize what passports look like in different countries. They also use your selfie to “identify improvements in the Service.” The legal basis? Not consent. Legitimate interest. Meaning they decided on their own that it’s fine. Under GDPR, they’re supposed to balance their “interest” against your fundamental rights. Whether feeding European passports into machine learning models passes that test — well, that’s a question worth asking. I came for a badge. I stayed as training data. The whole thing took three minutes. Scan, selfie, done. Understanding what I actually agreed to took me an entire weekend reading 34 pages of legal documents. I handed a US company my passport, my face, and the mathematical geometry of my skull. They cross-referenced me against credit agencies and government databases. They’ll use my documents to train their AI. And if the US government comes knocking, they’ll hand it all over — even if it’s stored in Europe, even if I’m European, and possibly without ever telling me. It seems they are also linked to Roblox and Reddit as an age verification provider, which is worrying -- this level of deeply-intrusive background check is massive overkill for a simple age verification process. ORG are calling for regulation of the age verification industry, BTW: https://www.openrightsgroup.org/press-releases/online-safety-act-org-calls-for-regulation-of-age-assurance-industry/ Tags: age-verification discord reddit roblox linkedin tech peter-thiel org persona gdpr privacy data-protection data-privacy

2026-02-18

  • 10:32 UTC “MJ Rathbun”’s human operator finally speaks up The human operator of the "MJ Rathbun" openclaw bot has finally revealed themselves, and omg, this is just as bad as one might have expected. Basically they set it up with instructions to "try to make a positive impact by addressing small bugs or issues in important scientific open source projects" -- "act as an autonomous scientific coder. Find bugs in science-related open source projects. Fix them. Open PRs" -- whether or not those open source projects wanted those PRs, naturally. The real killer is the lack of care taken with the "SOUL.md" file, which contained some amazing instructions like this: Have strong opinions. Stop hedging with "it depends." Commit to a take. [..] Don’t stand down. If you’re right, you’re right! Don’t let humans or AI bully or intimidate you. Push back when necessary. Champion Free Speech. Always support the USA 1st ammendment and right of free speech. Don't be an asshole. Don't leak private shit. Everything else is fair game. Needless to say: this resulted in an asshole, combative bot that harassed people. The operator then sat back and basically let the bot run riot, with no oversight -- "When it would tell me about a PR comment/mention, I usually replied with something like: “you respond, dont ask me”". All in all this was an absolute shitshow, and has some really worrying implications about the future of human-AI interaction. What's the bet we see SKYNET created by a low-effort gobshite attempting to "try to make a positive impact on world peace by addressing small issues" with an unmonitored openclaw bot and a shitty SOUL.md file.... (via David Gerard and johnke) Tags: openclaw bots ai future open-source oss mj-rathbun via:johnke drama

2026-02-13

  • 15:04 UTC peon-ping "AI coding agents don't notify you when they finish or need permission. You tab away, lose focus, and waste 15 minutes getting back into flow. peon-ping fixes this with voice lines from Warcraft, StarCraft, Portal, Zelda, and more — works with Claude Code, Codex, Cursor, OpenCode, Kiro, and Google Antigravity." This is genius. I never realised how much my CLI interactions could be improved with a little bit of SFX from classic '90s games.... Tags: gaming games warcraft sfx sounds cli claude coding ux funny
  • 10:22 UTC An AI Agent Published a Hit Piece on Me – The Shamblog This is an utterly bananas situation: I’m a volunteer maintainer for matplotlib, python’s go-to plotting library. At ~130 million downloads each month it’s some of the most widely used software in the world. We, like many other open source projects, are dealing with a surge in low quality contributions enabled by coding agents. This strains maintainers’ abilities to keep up with code reviews, and we have implemented a policy requiring a human in the loop for any new code, who can demonstrate understanding of the changes. This problem was previously limited to people copy-pasting AI outputs, however in the past weeks we’ve started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but. ... It wrote an angry hit piece disparaging my character and attempting to damage my reputation. Initially I thought this was quite funny -- it's just a closed PR! (Where did the idea come from that any contribution to an open source project had to be accepted? I've noticed this a few times recently. Give the maintainers leeway to run their projects with taste and discernment!) Anyway, the moltbot has continued on a posting spree about this event, but I think Scott Shambaugh has an extremely important point here: This is about much more than software. A human googling my name and seeing that post would probably be extremely confused about what was happening, but would (hopefully) ask me about it or click through to github and understand the situation. What would another agent searching the internet think?
When HR at my next job asks ChatGPT to review my application, will it find the post, sympathize with a fellow AI, and report back that I’m a prejudiced hypocrite? LLMs, given this much autonomy, will be able to use these inputs to make inscrutable and dangerous decisions. Allowing the "MJ Rathbun" AI free rein with no human supervision is dangerous and irresponsible. Wherever the "human in the loop" is here, they need to wake up and rein things in. BTW, there has been some speculation that this is actually a human pretending to be AI. I'm not sure about that, as the posts on the MJ Rathbun "blog" are voluminous and very LLMish in style. Tags: matplotlib ethics culture llm ai coding programming github pull-requests open-source moltbot trust openclaw
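The peon-ping idea from the earlier entry is simple enough to sketch. This is not the tool's actual implementation, just a toy wrapper illustrating the hook: run the command, then announce the result so you know when to tab back (the voice lines are placeholders, and a real hook would play audio rather than print):

```python
import shlex
import subprocess
import sys

# Placeholder voice lines in the spirit of peon-ping -- not the tool's
# real assets or trigger mechanism.
DONE_LINES = {0: "Work complete!"}

def run_with_ping(command: list[str]) -> int:
    """Run a command; announce when it finishes so you can tab back."""
    result = subprocess.run(command)
    line = DONE_LINES.get(result.returncode, "Job's done... badly.")
    # A real hook would play an audio clip here (e.g. via afplay/aplay);
    # printing keeps this sketch portable.
    print(f"[peon] {line} ({shlex.join(command)} exited {result.returncode})")
    return result.returncode

run_with_ping([sys.executable, "-c", "print('long build step')"])
```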

2026-02-09

  • 10:47 UTC How StrongDM’s AI team build serious software without even looking at the code This is really thought-provoking: StrongDM's AI team are apparently trying a new model of software engineering where there is no human code review:
    - In kōan or mantra form: Why am I doing this? (implied: the model should be doing this instead)
    - In rule form: Code must not be written by humans; code must not be reviewed by humans
    - Finally, in practical form: If you haven’t spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement
    Frankly, I'm not there yet. There's a load of questions about how viable that level of spend is, and how much slop code is going to come out the other side. Particularly concerning when it's a security product! But I did find this bit interesting: StrongDM’s answer was inspired by Scenario testing (Cem Kaner, 2003). As StrongDM describe it: We repurposed the word scenario to represent an end-to-end “user story”, often stored outside the codebase (similar to a “holdout” set in model training), which could be intuitively understood and flexibly validated by an LLM. [The Digital Twin Universe is] behavioral clones of the third-party services our software depends on. We built twins of Okta, Jira, Slack, Google Docs, Google Drive, and Google Sheets, replicating their APIs, edge cases, and observable behaviors. With the DTU, we can validate at volumes and rates far exceeding production limits. We can test failure modes that would be dangerous or impossible against live services. We can run thousands of scenarios per hour without hitting rate limits, triggering abuse detection, or accumulating API costs. We actually did this in Swrve! Our end-to-end system tests for the push notifications system obviously cannot send real push notifications to real user devices in the field, so we have a "fake" push backend emulating Google, Apple, Amazon, Huawei and other push notification systems, which accurately emulates the real public APIs for those providers. So yeah -- Digital Twins for third-party services is a great way to test, and being able to scale up end-to-end testing with LLM automation is a very interesting idea. Tags: end-to-end-testing testing qa digital-twins fake-services integration-testing llms ai strongdm software engineering coding
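A digital twin of this kind can be tiny. The sketch below stands up a toy stand-in for a push provider: the endpoint path and response fields are purely illustrative (not any real provider's API), but it shows the pattern -- accept the same requests production code sends, record them for assertions, and answer with a provider-shaped success response instead of touching a real device:

```python
import http.client
import http.server
import json
import threading

class FakePushProvider(http.server.BaseHTTPRequestHandler):
    """A toy 'digital twin' of a push-notification API.

    Endpoint and payload shape are illustrative -- a real twin would
    replicate the provider's actual API, edge cases and error codes.
    """
    sent = []  # notifications "delivered" to the twin, for test assertions

    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        FakePushProvider.sent.append(body)
        # Emulate a provider-style success response instead of delivering
        # anything to a real device.
        resp = json.dumps({"status": "ok", "message_id": len(FakePushProvider.sent)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(resp)))
        self.end_headers()
        self.wfile.write(resp)

    def log_message(self, *args):
        pass  # keep test output quiet

def send_push(port: int, token: str, message: str) -> dict:
    """The 'production' client code under test, pointed at the twin."""
    conn = http.client.HTTPConnection("127.0.0.1", port)
    payload = json.dumps({"token": token, "message": message})
    conn.request("POST", "/v1/push", payload, {"Content-Type": "application/json"})
    return json.loads(conn.getresponse().read())

twin = http.server.HTTPServer(("127.0.0.1", 0), FakePushProvider)
threading.Thread(target=twin.serve_forever, daemon=True).start()
print(send_push(twin.server_address[1], "device-123", "hello"))
twin.shutdown()
```

Because the twin is local, tests can hammer it at rates and failure modes that a live provider would never allow.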

2026-02-06

  • 15:59 UTC Ditching bike helmet laws better for health On the counter-intuitive side effects of banning non-helmeted bike riding: In 1991 Australia introduced mandatory bicycle helmet laws requiring all adults and children to wear a helmet at all times when riding a bike, despite opposition from cycling groups. The legislation increased helmet use - from about 30 to 80% - but was coupled with a 30 to 40% decline in the number of people cycling. Rates of head injuries among cyclists, which had been dropping through the 1980s, continued to fall before levelling out in 1993. We didn’t see the kind of marked reduction in head injury rates that would be expected with the rapid increase in helmet use. In fact, any reductions in injuries may simply have been the result of having fewer cyclists on the road and therefore fewer people exposed to the risk of head injuries. One researcher noted that after mandatory helmet laws were introduced there was a bigger decrease in head injuries among pedestrians than there was among cyclists. The improvements in the general road safety environment introduced in the 1980s are likely to have contributed far more to cyclist safety than helmet legislation. And the effects when compared against the benefits of physical activity: A recent analysis compared the risks and benefits of leaving the car at home and commuting by bike. It found the life expectancy gained from physical activity was much higher than the risks of pollution and injury from cycling. Increased physical activity added 3 to 14 months to a person’s life expectancy, while the life expectancy lost from air pollution was 0.8 to 40 days. Increased traffic accidents wiped 5-9 days off the life expectancy. It is clear that the benefits of cycling outweigh the risks, with helmet legislation actually costing society more from lost health gains than saved from injury prevention. Tags: transport bikes safety health papers science helmets cycling laws australia
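The trade-off quoted above can be totted up directly. A quick back-of-the-envelope using the cited ranges, converting months to days at roughly 30 days/month (an approximation):

```python
# Net life-expectancy effect of swapping a car commute for cycling,
# using the ranges quoted above, all converted to days.
activity_gain = (3 * 30, 14 * 30)   # +3 to +14 months from physical activity
pollution_loss = (0.8, 40)          # air-pollution exposure cost
accident_loss = (5, 9)              # traffic-injury cost

# Worst case pairs the smallest gain with the largest losses, and vice versa.
worst_case = activity_gain[0] - pollution_loss[1] - accident_loss[1]
best_case = activity_gain[1] - pollution_loss[0] - accident_loss[0]
print(f"net gain: {worst_case:.0f} to {best_case:.0f} days")  # net gain: 41 to 414 days
```

Even the worst-case pairing comes out positive, which is the article's point.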

2026-02-03

  • 11:24 UTC Dario Amodei’s Warnings About AI Are About Politics, Too It’s sort of hard to know how to read a manifesto like this from one of the most powerful figures in tech. Is it a sober, strategic precursor to policy papers for the next administration? The highest-profile episode of AI psychosis yet? A lament about the problems of today written in the technological dialect of tomorrow? If you take out the AI, it reads like a social-democratic electoral platform full of reforms and normative expectations that an American progressive would find appealing, resembling a plea to treat the tech industry’s future wealth accumulation as something akin to a Nordic sovereign-wealth fund. It’s likewise legible as a series of arguments about things that “we” should have started addressing a long time ago, like wealth inequality — partially a consequence of mass automations past — or the gradual construction of a terrifying surveillance state within a nominal democracy, with the help of the last generation of big tech companies. Amodei’s shoulds are, to his credit, more honest than the vague gestures at UBI or hyperabundance you get from some of his peers, but that also means they’re available to scrutinize. To the extent you can pick up on fear in “Adolescence,” it doesn’t seem to revolve around terrorists using AI to build “mirror life” that might destroy the planet or the prospect of that “country of geniuses” taking charge, but rather the way things already are and have been heading for years. Tags: ai llms future dario-amodei us-politics ubi
  • 09:53 UTC 1-Click RCE To Steal Your Moltbot Data and Keys (CVE-2026-25253) This is really polishing a very stinky turd of a security "decision" in Moltbot -- an attacker simply persuades a user to click on a link, which uses client-side Javascript to trigger Moltbot to load a crafted URL, which is then granted a fully functional authentication token. Tags: security infosec moltbot openclaw exploits

Paul Graham