Recent Events for foo.be MainPageDiary (Blog)

FeedCollection

Justin Mason

2026-01-16

  • 15:11 UTC Reverse engineering my cloud-connected e-scooter and finding the master key to unlock all scooters A great example of reverse engineering an Android app and Bluetooth IoT protocol using Frida and root access on an Android device: Android exposes the Java classes android.bluetooth.BluetoothGatt and android.bluetooth.BluetoothGattCallback that apps are expected to use for GATT characteristics. We can use Frida to hook into these and override many of the interesting functions. I was mostly interested in reads, writes and GATT notifications, so I whipped up a Frida script to hook into these and print all comms to the console [...] The 20-byte value had me suspecting that SHA-1 was somehow being used. To confirm, I wrote another Frida script that hooks Android hashing functions exposed by the Java class java.security.MessageDigest [...] The app uses Firebase for most of its cloud functionality. When signing in and pairing your scooter, the server sends the app a secret key. This is stored on the Android device, and can be read with root access. (A rough sketch of this style of Frida hook follows below.) Tags: frida reverse-engineering android firebase java kotlin gatt bluetooth react-native
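
Not from the article, but as a rough sketch of the technique it describes: the snippet below uses Frida's Python bindings to hook BluetoothGatt writes and MessageDigest calls in a running Android app. The package name com.example.scooter and the chosen overloads are assumptions for illustration, not details from the post, and frida-server is assumed to be running as root on the device.

    import frida

    # JavaScript payload injected into the target app's process.
    HOOKS = """
    Java.perform(function () {
      // Log every GATT characteristic write the app performs.
      var Gatt = Java.use("android.bluetooth.BluetoothGatt");
      Gatt.writeCharacteristic
          .overload("android.bluetooth.BluetoothGattCharacteristic")
          .implementation = function (ch) {
        send("GATT write to " + ch.getUuid().toString());
        return this.writeCharacteristic(ch);
      };

      // Log which hash algorithms the app computes (e.g. SHA-1).
      var MD = Java.use("java.security.MessageDigest");
      MD.digest.overload().implementation = function () {
        send("MessageDigest.digest() using " + this.getAlgorithm());
        return this.digest();
      };
    });
    """

    def on_message(message, data):
        print(message)

    device = frida.get_usb_device()
    session = device.attach("com.example.scooter")  # hypothetical package name
    script = session.create_script(HOOKS)
    script.on("message", on_message)
    script.load()
    input("Hooks installed; use the app, then press Enter to exit\n")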

2026-01-15

  • 13:41 UTC Why people believe misinformation even when they’re told the facts "Factchecking is seen as a go-to method for tackling the spread of false information. But it is notoriously difficult to correct misinformation. Evidence shows readers trust journalists less when they debunk, rather than confirm, claims. The work of media scholar Alice Marwick can help explain why factchecking often fails when used in isolation. Her research suggests that misinformation is not just a content problem, but an emotional and structural one: [Marwick] argues that it thrives through three mutually reinforcing pillars: the content of the message, the personal context of those sharing it, and the technological infrastructure that amplifies it: People find it cognitively easier to accept information than to reject it, which helps explain why misleading content spreads so readily; When fabricated claims align with a person’s existing values, beliefs and ideologies, they can quickly harden into a kind of “knowledge”. This makes them difficult to debunk; [When social media platforms] prioritise content likely to be shared, making sharing effortless, every like, comment or forward feeds the [misinformation] system. The platforms themselves act as a multiplier. Tags: misinformation disinformation alice-marwick research psychology social-media fake-news information debunking facts factchecking
  • 09:56 UTC A better way to limit Claude Code (and other coding agents!) access to Secrets Bubblewrap, a Linux CLI tool which uses namespaces to sandbox a specific command (and its subprocesses): Bubblewrap lets you run untrusted or semi-trusted code without risking your host system. We’re not trying to build a reproducible deployment artifact. We’re creating a jail where coding agents can work on your project while being unable to touch ~/.aws, your browser profiles, your ~/Photos library or anything else sensitive. Very nice, I hadn't heard of this tool before. The rest of the blog post details how to use it to isolate Claude Code specifically. (A hedged sketch of such a bwrap invocation is below.) Tags: claude llms sandboxing linux cli namespaces security infosec trust unix
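
As a hedged illustration (not the exact recipe from the post), the sketch below launches a command inside a Bubblewrap jail from Python: the home directory is replaced by an empty tmpfs, so ~/.aws, browser profiles and the like simply do not exist inside the sandbox, and only the project directory is bind-mounted back in read-write. The project path and the "claude" command are placeholders.

    import os
    import subprocess

    home = os.path.expanduser("~")
    project = os.path.join(home, "projects/demo")  # placeholder project path

    cmd = [
        "bwrap",
        "--ro-bind", "/usr", "/usr",   # system directories, read-only
        "--ro-bind", "/etc", "/etc",
        "--symlink", "usr/bin", "/bin",
        "--symlink", "usr/lib", "/lib",
        "--proc", "/proc",
        "--dev", "/dev",
        "--tmpfs", home,               # empty home: secrets are simply absent
        "--bind", project, project,    # only the project tree is writable
        "--unshare-all",               # fresh PID/IPC/mount/user namespaces
        "--share-net",                 # keep networking so the agent still works
        "--die-with-parent",
        "--chdir", project,
        "claude",                      # placeholder for the coding-agent command
    ]
    subprocess.run(cmd, check=True)

Mount options are applied in order, so the tmpfs over $HOME goes first and the project directory is then bound back on top of it; everything else under the home directory is invisible to the jailed process.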

2026-01-14

  • 10:49 UTC Russian Propaganda Infects AI Chatbots CEPA: "A Moscow-based global “news” network is leveraging Western artificial intelligence tools to devastating effect": This form of data poisoning is deliberately designed to corrupt the information environments on which AI systems depend. Large language models do not possess an internal understanding of truth. They operate by assessing credibility based on statistical signals, including repetition, apparent consensus, and cross-referencing posts from across the web. Unfortunately, this approach to truth-seeking creates an unexpected but structural vulnerability that hostile states have learned to exploit. [...] The West has failed to recognize that it is under sustained information warfare. The United States dismantled the US Information Agency years ago, has steadily weakened Voice of America and Radio Free Europe, and recently scaled back the Foreign Malign Influence Center, even as Russia, China, and Iran made information warfare a core instrument of state power. As AI systems increasingly function as arbiters of fact, this vulnerability becomes a national security danger. It is no longer sufficient for technology companies to disclaim responsibility by reminding users that models can make mistakes. Information security needs to be treated as a core requirement. Tags: propaganda russia misinformation disinformation ai llms web truth

2026-01-08

  • 11:59 UTC Today in “Google broke email” An update on the POP3pocalypse -- it appears that the most likely thing to work in the future will be to use SMTP forwarding to gmail, with ARC headers added. This is a comment thread detailing the rather complex Postfix/OpenARC setup that may do the job. It looks frankly unpleasant. Tags: email smtp pop3 gmail arc forwarding postfix openarc

2026-01-05

  • 11:05 UTC Pi Reliability: Reduce writes to your SD card Techniques to extend SD card lifespans in Raspberry Pi systems; putting /var/log into RAM is a nice trick. Tags: reliability raspberry-pi hardware home sd-cards ram
  • 11:05 UTC Solid state drive - ArchWiki The Arch Linux wiki page about SSD tuning and enabling TRIM -- extremely detailed and useful! Tags: trim ssd hardware arch-linux linux
  • 11:05 UTC Understanding EV Battery Life Ireland's SEAI have published a decent blog post with some real-world facts about EV battery lifespans: In 2020 GeoTab, a telematics solution provider, published real-world battery data from 6,000 EVs (BEV & PHEV) over millions of days to produce two free-to-use tools that provide invaluable insight into the impact of temperature and the long-term SoH of EV batteries. This real-world data showed the average EV battery lost around 2.3% capacity per year. In other words, a 300km-range EV today will have lost 34km in 5 years. Data also showed that heat & fast-charging (DC charging) are responsible for more battery degradation than age or mileage, so high levels of use (i.e. driving or mileage) do not appear to be a concern. Alongside GeoTab's real-world data and other reports of EVs far surpassing their warranty by multiples of distance, cases of high levels of use are plentiful. For example, a 2017 Renault Zoe 52kWh in use as a taxi in (hot) Turkey has 345,000km on the clock and a near-perfect 96% SoH, after driving further than an average Irish car's life expectancy. (A quick arithmetic check of the 2.3%/year figure is below.) Tags: seai ev batteries cars driving bev
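
A quick sanity check of the 2.3%-per-year figure quoted above (just the article's own numbers, recomputed):

    annual_loss = 0.023   # average capacity loss per year (GeoTab figure)
    range_km = 300        # new-EV range used in the example
    years = 5

    # Simple linear estimate, as in the article:
    print(range_km * annual_loss * years)                 # ~34 km lost after 5 years

    # Compounding estimate (each year's 2.3% applies to what remains):
    print(range_km * (1 - (1 - annual_loss) ** years))    # ~33 km lost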

2025-12-18

  • 10:48 UTC _Cheap science, real harm: the cost of replacing human participation with synthetic data_ [pdf] A new paper from the inimitable Abeba Birhane, on the increasingly common practice of generating synthetic data using LLMs: Driven by the goals of augmenting diversity, increasing speed, reducing cost, the use of synthetic data as a replacement for human participants is gaining traction in AI research and product development. This talk critically examines the claim that synthetic data can “augment diversity,” arguing that this notion is empirically unsubstantiated, conceptually flawed, and epistemically harmful. While speed and cost-efficiency may be achievable, they often come at the expense of rigour, insight, and robust science. Drawing on research from dataset audits, model evaluations, Black feminist scholarship, and complexity science, I argue that replacing human participants with synthetic data risks producing both real-world and epistemic harms at worst and superficial knowledge and cheap science at best. "Synthetic data: stereotypes compressed" is absolutely spot on. This doesn't give insights into human behaviour and beliefs, just into stereotypes. It is increasingly common in social science fields, under the names of "digital twins" and "silicon samples". Tags: data surveys abeba-birhane papers ai synthetic-data digital-twins simulation testing social-science silicon-samples

Paul Graham