Recent Events for foo.be MainPageDiary (Blog)

FeedCollection

hack.lu 2007

http://www.hack.lu/news.rdf returned no data, or LWP::UserAgent is not available.

adulau SVN

http://a.6f2.net/svnweb/index.cgi/adulau/rss/ returned no data, or LWP::UserAgent is not available.

Michael G. Noll

http://www.michael-noll.com/feed/ returned no data, or LWP::UserAgent is not available.

Justin Mason

2025-10-24

  • 12:23 UTC episodic-memory plugin for Claude Code "a memory system for Claude that gives it perfect recall of everything it's worked on as far back as you have logs" Tags: memory llms claude claude-code plugins
  • 12:22 UTC asg017/sqlite-vec "A vector search SQLite extension that runs anywhere" -- this is nifty. Vector embeddings in an embedded database! (A minimal usage sketch follows this list.) Tags: sqlite databases sql search vectors vector-embeddings fuzzy-matching
  • 11:27 UTC Citywest riot raises questions for social media giants – The Irish Times This is a huge, huge social problem. People are being paid to hate -- regulation is desperately needed to deal with this: "This week’s violence has raised serious questions for some of the main social media platforms. Livestream content depicting violence outside Citywest was broadcast on YouTube, TikTok and Twitch, with streamers rewarded by viewer donations, as they captured protesters shouting racist expletives towards Citywest. In one eight-minute segment of an hour-long livestream I watched on YouTube that night, the user broadcast the burning of the Garda van, referred to migrants in horrific terms and proclaimed they were there to show people “the real truth”. During the video, they received the equivalent of €56 in donations from viewers around the world. The notion that violence can be monetised on social media illustrates a glaring failure of platforms to adequately enforce their own community guidelines around violence. Individuals from the UK and Canada travelled to Ireland specifically to attend and create content from the protest. Other international agitators followed events online. [...] In recent years we have witnessed the mainstreaming of anti-migrant hate and extremism in this country. That has been facilitated, in part, by platforms failing to enforce their own community guidelines. Amid the anger and outrage that follows an alleged sexual assault, it is now a recurring pattern that online platforms will play host to attempts to publish and promote incitement towards hatred and violence." Tags: hate racism monetisation streaming video tiktok youtube facebook twitter x social-media livestreams twitch citywest far-right
  • 09:34 UTC soedinglab/MMseqs2 "MMseqs2 (Many-against-Many sequence searching) is a software suite to search and cluster huge protein and nucleotide sequence sets. MMseqs2 is free and open source software implemented in C++ for Linux, MacOS, and (as beta version, via cygwin) Windows. The software is designed to run on multiple cores and servers and exhibits very good scalability. MMseqs2 can run 10000 times faster than BLAST. At 100 times its speed it achieves almost the same sensitivity. It can perform profile searches with the same sensitivity as PSI-BLAST at over 400 times its speed." I was just remembering using BLAST to discover anti-spam rulesets the other day! If I was still working on rule discovery for SpamAssassin these days, this would be very nifty tech. (via James McInerney) Tags: mmseq sequences bioinformatics algorithms oss blast discovery search fuzzy-matching rules antispam
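
To make the sqlite-vec entry above concrete, here is a minimal sketch following the project's documented Python usage: load the extension into a standard sqlite3 connection, create a vec0 virtual table, insert a few embeddings, and run a nearest-neighbour query. The table name, column name, and vectors are invented for illustration.

    import sqlite3
    import struct

    import sqlite_vec  # pip install sqlite-vec

    def serialize_f32(vector):
        """Pack a list of floats into the raw little-endian format vec0 accepts."""
        return struct.pack("%sf" % len(vector), *vector)

    db = sqlite3.connect(":memory:")
    db.enable_load_extension(True)
    sqlite_vec.load(db)
    db.enable_load_extension(False)

    # Virtual table holding 4-dimensional float embeddings (illustrative schema)
    db.execute("CREATE VIRTUAL TABLE vec_items USING vec0(embedding float[4])")
    items = [
        (1, [0.1, 0.1, 0.1, 0.1]),
        (2, [0.2, 0.2, 0.2, 0.2]),
        (3, [0.9, 0.9, 0.9, 0.9]),
    ]
    with db:
        for rowid, vec in items:
            db.execute(
                "INSERT INTO vec_items(rowid, embedding) VALUES (?, ?)",
                [rowid, serialize_f32(vec)],
            )

    # K-nearest-neighbour query: rows come back ordered by distance
    rows = db.execute(
        "SELECT rowid, distance FROM vec_items "
        "WHERE embedding MATCH ? ORDER BY distance LIMIT 2",
        [serialize_f32([0.15, 0.15, 0.15, 0.15])],
    ).fetchall()
    print(rows)  # rowids 1 and 2 should be the closest matches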

2025-10-23

  • 15:10 UTC Adrian Cockcroft's take on the AWS outage "[I]n my opinion the root cause of the recent AWS outage is their architectural decision to have everything depend on the same instance of DynamoDB, including operation of DynamoDB itself. This is a circular dependency, and the ability to observe and fix the failure as it happened also failed. The ability of customers to file service reports failed. So the engineers trying to figure out what was happening were completely blind. It took them an hour to figure out what had broken and another hour to fix it, then the pent up demand rushing in broke other key services for another 12 hours or so. If DNS had been misconfigured on a different non-critical service, I think it would have been obvious to detect and quick and easy to fix. However, anything going wrong that also takes out the ability to see it going wrong and fix it, is a liability. To break the circular dependency, I think there needs to be a separate, internal only, set of services and data stores that the most critical AWS services use, and which are designed to come up without dependencies on public interfaces. Maybe an internal region, inside each public region, but with a simpler implementation that has few carefully managed dependencies. Otherwise, it’s just a matter of time until this happens again." (A toy illustration of the circular-dependency point follows this list.) Tags: adrian-cockroft outages post-mortems aws amazon us-east-1 dynamodb circular-dependencies
  • 15:09 UTC Summary of the Amazon DynamoDB Service Disruption in Northern Virginia (US-EAST-1) Region Postmortem writeup of this week's massive AWS us-east-1 outage. tl;dr: DynamoDB runs into a consistency failure in an internal DNS optimization service; EC2 provisioning depends on DynamoDB and craps out; network load balancers screw up due to the impact of the EC2 outage. Tags: dynamodb dns aws ec2 nlb outages post-mortems cloud-computing amazon us-east-1
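
As a toy illustration of Cockcroft's circular-dependency point (the service names and edges below are invented, not AWS's real internal architecture): a control plane can only be brought back up in some order if its dependency graph is acyclic, and a simple depth-first search is enough to surface the offending cycle.

    # Hypothetical dependency graph: "X depends on Y" edges.
    deps = {
        "dynamodb": ["dns-management"],          # endpoints resolved via internal DNS...
        "dns-management": ["dynamodb"],          # ...whose state lives in DynamoDB
        "ec2-provisioning": ["dynamodb"],
        "status-dashboard": ["ec2-provisioning", "dynamodb"],
    }

    def find_cycle(graph):
        """Return one dependency cycle as a list of nodes, or None if acyclic."""
        WHITE, GREY, BLACK = 0, 1, 2
        colour = {n: WHITE for n in graph}
        stack = []

        def visit(node):
            colour[node] = GREY
            stack.append(node)
            for dep in graph.get(node, []):
                if colour.get(dep, WHITE) == GREY:       # back-edge: cycle found
                    return stack[stack.index(dep):] + [dep]
                if colour.get(dep, WHITE) == WHITE:
                    cycle = visit(dep)
                    if cycle:
                        return cycle
            stack.pop()
            colour[node] = BLACK
            return None

        for node in graph:
            if colour[node] == WHITE:
                cycle = visit(node)
                if cycle:
                    return cycle
        return None

    print(find_cycle(deps))  # ['dynamodb', 'dns-management', 'dynamodb']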

2025-10-21

  • 10:02 UTC Today is when Amazon brain drain finally caught up with AWS • The Register Corey "Last Week In AWS" Quinn really getting the boot in on AWS after yesterday's gigantic us-east-1 outage: AWS has given increasing levels of detail, as is their tradition, when outages strike, and as new information comes to light. Reading through it, one really gets the sense that it took them 75 minutes to go from "things are breaking" to "we've narrowed it down to a single service endpoint, but are still researching," which is something of a bitter pill to swallow. To be clear: I've seen zero signs that this stems from a lack of transparency, and every indication that they legitimately did not know what was breaking for a patently absurd length of time. [...] At the end of 2023, Justin Garrison left AWS and roasted them on his way out the door. He stated that AWS had seen an increase in Large Scale Events (or LSEs), and predicted significant outages in 2024. It would seem that he discounted the power of inertia, but the pace of senior AWS departures certainly hasn't slowed — and now, with an outage like this, one is forced to wonder whether those departures are themselves a contributing factor. You can hire a bunch of very smart people who will explain how DNS works at a deep technical level (or you can hire me, who will incorrect you by explaining that it's a database), but the one thing you can't hire for is the person who remembers that when DNS starts getting wonky, check that seemingly unrelated system in the corner, because it has historically played a contributing role to some outages of yesteryear. When that tribal knowledge departs, you're left having to reinvent an awful lot of in-house expertise that didn't want to participate in your RTO games, or play Layoff Roulette yet again this cycle. This doesn't impact your service reliability — until one day it very much does, in spectacular fashion. I suspect that day is today. Ouch. This is a very painful read and I'd say AWS are not happy to see it... Tags: aws amazon layoffs tech how-we-work lses outages us-east-1 rto brain-drain work

2025-10-19

  • 15:16 UTC Linux Capabilities instead of setuid This seems like a pretty poor idea for Linux to have implemented: The command setcap sets file capabilities on an executable. The cap_setuid capability allows a process to make arbitrary manipulations of user IDs (UIDs), including setting the UID to a value that would otherwise be restricted (i.e. UID 0, the root user). setcap takes a set of flags: e (Effective) means the capability is activated; p (Permitted) means the capability can be used/is allowed. Putting this together, we’re adding the cap_setuid capability to the Python binary: setcap cap_setuid+ep /usr/bin/python3.12 And hey presto, "/usr/bin/python3 -c 'import os;os.setuid(0);os.system("/bin/bash")'" now works. Ouch. (A sketch for spotting binaries with file capabilities follows below.) Tags: linux permissions setuid capabilities setcap infosec security root
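
As a rough companion to the entry above, here is a sketch of spotting binaries that carry file capabilities -- roughly what "getcap -r /usr/bin" reports. It is Linux-only and assumes the standard vfs_cap_data layout of the security.capability extended attribute; the scanned directory is just an example.

    import os
    import struct

    CAP_SETUID = 7  # capability number for cap_setuid

    def file_capabilities(path):
        """Return the raw security.capability xattr, or None if not set."""
        try:
            return os.getxattr(path, "security.capability")
        except OSError:
            return None

    def has_cap_setuid(raw):
        # Bytes 4..8 hold the low 32 bits of the permitted set
        # (assuming the usual struct vfs_cap_data layout).
        if raw is None or len(raw) < 8:
            return False
        permitted_lo = struct.unpack("<I", raw[4:8])[0]
        return bool(permitted_lo & (1 << CAP_SETUID))

    for root, _dirs, files in os.walk("/usr/bin"):
        for name in files:
            path = os.path.join(root, name)
            raw = file_capabilities(path)
            if raw:
                flag = "  <-- cap_setuid!" if has_cap_setuid(raw) else ""
                print(path + flag)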

2025-10-17

  • 11:23 UTC Obituary: Farewell to robots.txt (1994-2025) "It is with deep sorrow that we announce the end of robots.txt, the humble text file that served as the silent guardian of digital civility for thirty years. Born on February 1, 1994, out of necessity when Martijn Koster’s server crashed under a faulty crawler named “Websnarf,” robots.txt passed away in July 2025, not by Cloudflare’s hand, but from the consequences of systematic disregard by AI corporations. The protocol taught us that technology can be based on human values like ethics and morality. It showed that voluntary compliance works when all parties benefit. Its greatest achievement was perhaps preserving the internet for three decades from what it has become today – a soulless extraction machine." Tags: internet history robots.txt crawlers web obituaries protocols ai via:mariafarrell
  • 11:05 UTC LOTO TIL about "LOTO" -- "Lock Out Tag Out". This is basically a physical mutex lock -- each worker has their own padlock which they attach to dangerous equipment in order to ensure that it can't be turned on (potentially killing someone) while it's being worked on; once they've completed the high-risk task, they then remove their own lock. Removing or damaging someone else's lock is considered an Extremely Big Deal and liable to get that person fired. (A toy model of the analogy follows below.) Tags: loto mutex locks workplaces osha safety via:ChristinaB
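
A toy model of the mutex analogy in the LOTO entry above (class and method names are invented): the equipment can only be re-energised once every attached padlock is gone, and each worker is expected to remove only their own lock.

    class LockoutHasp:
        """Padlocks attached by workers to keep a piece of equipment de-energised."""

        def __init__(self, equipment):
            self.equipment = equipment
            self.locks = set()  # names of workers whose padlocks are attached

        def attach(self, worker):
            self.locks.add(worker)

        def remove(self, worker):
            # In the real world, removing a lock that isn't yours is the
            # "Extremely Big Deal"; here we only check that the lock exists.
            if worker not in self.locks:
                raise PermissionError(f"{worker} has no lock on {self.equipment}")
            self.locks.remove(worker)

        def energise(self):
            if self.locks:
                raise RuntimeError(
                    f"{self.equipment} is locked out by {sorted(self.locks)}")
            print(f"{self.equipment} energised")

    hasp = LockoutHasp("conveyor")
    hasp.attach("alice")
    hasp.attach("bob")
    hasp.remove("alice")
    try:
        hasp.energise()  # still locked out: bob is working on it
    except RuntimeError as e:
        print(e)
    hasp.remove("bob")
    hasp.energise()  # all locks removed, safe to re-energise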

Paul Graham