department of hack

A Spoiler-Filled Review of Finishing The Wheel of Time at 39

I usually read 50 or 60 books a year. Not in 2019. In 2019, I read The Wheel of Time. I've started saying "burn me" and "light" in public. I think even "bloody ashes" slipped out once. I know how to get from Tear to Tar Valon without a map. I know how much toh I have with my parents. It's a lot. By ...
brennen
4 days ago
Boulder, CO

When the revolution comes, I’ll lead a special squad to wreck, just absolu...


When the revolution comes, I’ll lead a special squad to wreck, just absolutely PC LOAD LETTER, these fucking gas pumps that play ads and “content” at me.

brennen
6 days ago
Boulder, CO

From context collapse to content collapse


When social media was taking shape fifteen-odd years ago, the concept of “context collapse” helped frame and explain the phenomenon. Young scholars like Danah Boyd and Michael Wesch, building on the work of Joshua Meyrowitz, Erving Goffman, and other sociologists and media theorists, argued that networks like Friendster, MySpace, YouTube, and, later, Facebook and Twitter were dissolving the boundaries between social groups that had long shaped personal relations and identities. Before social media, you spoke to different “audiences” — family members, friends, colleagues, and so forth — in different ways. You modulated your tone of voice, your words, your behavior, and even your appearance to suit whatever social “context” you were in (workplace, home, school, nightclub, etc.) and then readjusted the presentation of yourself when you moved into another context.

On a social network, the theory went, all those different contexts collapsed into a single context. Whenever you posted a message or a photograph or a video, it could be seen by your friends, your parents, your coworkers, your bosses, and your teachers, not to mention the amorphous mass known as the general public. And, because the post was recorded, it could be seen by future audiences as well as the immediate one. When people realized they could no longer present versions of themselves geared to different audiences — it was all one audience now — they had to grapple with a new sort of identity crisis. Wesch described the experience in suitably melodramatic terms in an influential 2009 article about the pioneering vloggers on YouTube:

The problem is not a lack of context. It is context collapse: an infinite number of contexts collapsing upon one another into that single moment of recording. The images, actions, and words captured by the lens at any moment can be transported to anywhere on the planet and preserved (the performer must assume) for all time. The little glass lens becomes the gateway to a black hole sucking all of time and space — virtually all possible contexts — in on itself. The would-be vlogger, now frozen in front of this black hole of contexts, faces a crisis of self-presentation.

As everyone rushed to join Facebook and other social networks, context collapse and the attendant crisis of self-presentation became universal. In a 2010 interview with the journalist David Kirkpatrick, Facebook founder Mark Zuckerberg put it bluntly: “You have one identity. The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly.” Zuckerberg praised context collapse as a force for moral cleanliness: “Having two identities for yourself is an example of a lack of integrity.” Facebook forces us to be pure.

But just as Zuckerberg was declaring context collapse an inevitability, the public rebelled. Desiring to keep social spheres separate, people began looking for ways to reestablish the old social boundaries within the new media environment. We decided — most of us, anyway — that we don’t want all the world to be our stage, at least not all the time. We want to perform different parts on different stages for different audiences. We’re happier as character actors than as stars.

The recent history of social media isn’t a story of context collapse. It’s a story of its opposite: context restoration. Young people led the way, moving much of their online conversation from the public platform of Facebook, where parents and teachers lurked, to the more intimate platform of Snapchat, where they could restrict their audience and where messages disappeared quickly. Private accounts became popular on other social networks as well. Group chats and group texts proliferated. On Instagram, people established pseudonymous accounts — fake Instagrams, or finstas — limited to their closest friends. Responding to the trend, Facebook itself introduced tools that allow members to restrict who can see a post and to specify how long the post stays visible. (Apparently, Zuckerberg has decided he’s comfortable undermining the integrity of the public.)

Context collapse remains an important conceptual lens, but what’s becoming clear now is that a very different kind of collapse — content collapse — will be the more consequential legacy of social media. Content collapse, as I define it, is the tendency of social media to blur traditional distinctions among once distinct types of information — distinctions of form, register, sense, and importance. As social media becomes the main conduit for information of all sorts — personal correspondence, news and opinion, entertainment, art, instruction, and on and on — it homogenizes that information as well as our responses to it.

Content began collapsing the moment it began to be delivered through computers. Digitization made it possible to deliver information that had required specialized mediums — newspapers and magazines, vinyl records and cassettes, radios, TVs, telephones, cinemas, etc. — through a single, universal medium. In the process, the formal standards and organizational hierarchies inherent to the old mediums began to disappear. The computer flattened everything.

I remember, years ago, being struck by the haphazardness of the headlines flowing through my RSS reader. I’d look at the latest update to the New York Times feed, for instance, and I’d see something like this:

Dam Collapse Feared as Flood Waters Rise in Midwest
Nike’s New Sneaker Becomes Object of Lust
Britney Spears Cleans Up Her Act
Scores Dead in Baghdad Car-Bomb Attack
A Spicy New Take on Bean Dip

It wasn’t just that the headlines, free-floating, decontextualized motes of journalism ginned up to trigger reflexive mouse clicks, had displaced the stories. It was that the whole organizing structure of the newspaper, its epistemological architecture, had been junked. The news section (with its local, national, and international subsections), the sports section, the arts section, the living section, the opinion pages: they’d all been fed through a shredder, then thrown into a wind tunnel. What appeared on the screen was a jumble, high mixed with low, silly with smart, tragic with trivial. The cacophony of the RSS feed, it’s now clear, heralded a sea change in the distribution and consumption of information. The new order would be disorder.

The collapse gained momentum after Facebook introduced its News Feed in 2006. To a dog’s breakfast of news headlines, the News Feed added a dog’s breakfast of personal posts and messages and then mixed in another dog’s breakfast of sponsored posts and ads. It looked, smelled, and tasted like the meal Brad Pitt feeds his pitbull in Once Upon a Time … in Hollywood. After a brief period of complaining, with the usual and empty #deletefacebook threats, the public embraced the News Feed. The convenience of getting all content of interest through a single stream — no need to jump from site to site anymore — overrode the initial concerns. Now, everything would take the form of an “update.”

In discussing the appeal of the News Feed in that same interview with Kirkpatrick, Zuckerberg observed, “A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa.” The statement is grotesque not because it’s false — it’s true, actually — but because it’s a category error. It yokes together in an obscene comparison two events of radically different scale and import. And yet, in his tone-deaf way, Zuckerberg managed to express the reality of content collapse. When it comes to information, social media renders category errors obsolete.

The rise of the smartphone has completed the collapse of content. The diminutive size of the device’s screen further compacted all forms of information. The instant notifications and infinite scrolls that became the phone’s default design standards required that all information be rendered in a way that could be taken in at a glance, further blurring the old distinctions between types of content. Now all information belongs to a single category, and it all pours through a single channel.

Many of the qualities of social media that make people uneasy stem from content collapse. First, by leveling everything, social media also trivializes everything — freed of barriers, information, like water, pools at the lowest possible level. A presidential candidate’s policy announcement is given equal weight to a snapshot of your niece’s hamster and a video of the latest Kardashian contouring. Second, as all information consolidates on social media, we respond to it using the same small set of tools the platforms provide for us. Our responses become homogenized, too. That’s true of both the form of the responses (repost, retweet, like, heart, hashtag, fire emoji) and their content (Love! Hate! Cringe!). The software’s formal constraints place tight limits on our expressiveness, no matter what we’re talking about.

Third, content collapse puts all types of information into direct competition. The various producers and providers of content, from journalists to influencers to politicians to propagandists, all need to tailor their content and its presentation to the algorithms that determine what people see. The algorithms don’t make formal or qualitative distinctions; they judge everything by the same criteria. And those criteria tend to promote oversimplification, emotionalism, tendentiousness, orthodoxy — the qualities that make a piece of information stand out, at least momentarily, from the screen’s blur.

Finally, content collapse consolidates power over information, and conversation, into the hands of the small number of companies that own the platforms and write the algorithms. The much maligned gatekeepers of the past could exert editorial control only over a particular type of content that flowed through a particular medium — a magazine, a radio station, a TV network. Our new gatekeepers control information of all kinds. When content collapses, there’s only one gate.


brennen
6 days ago
Boulder, CO

Crossfade Dissonance


@pamela :

I will never, ever tire of seamlessly transitioning from the end of Mean Girls to the beginning of Hackers with the same song, this was a damn *gift* given to us by the movie industry

@mhoye :

@pamela Has somebody actually crossfaded the video for this?

@pamela :

@mhoye not that I’ve found, but I live in hope…

@kiethzg :

@pamela @mhoye Sounds like a fun little project to start off my weekend with!
@pamela @mhoye I actually got distracted with even sillier things, but! Finally did this. Then watched it on a loop for a bit. Then remembered I should actually upload it somewhere! So here it is:

I really love the idea of jumping from movie to completely unrelated movie through a common song and a smooth soundtrack crossfade. The only rule, really, is that the song you jump into a movie with has to be earlier in the movie than the one you jump out with. Anyone out there got a dataset of movie soundtracks I could use to cobble together an Oracle Of Bacon-like tool for figuring out the forward soundtrack distance between movies?

brennen
14 days ago
Boulder, CO

Can We Build Trustable Hardware?


Why Open Hardware on Its Own Doesn’t Solve the Trust Problem

A few years ago, Sean ‘xobs’ Cross and I built an open-source laptop, Novena, from the circuit boards up, and shared our designs with the world. I’m a strong proponent of open hardware, because sharing knowledge is sharing power. One thing we didn’t anticipate was how much the press wanted to frame our open hardware adventure as a more trustable computer. If anything, the process of building Novena made me acutely aware of how little we could trust anything. As we vetted each part for openness and documentation, it became clear that you can’t boot any modern computer without several closed-source firmware blobs running between power-on and the first instruction of your code. Critics on the Internet suggested we should have built our own CPU and SSD if we really wanted to make something we could trust.

I chewed on that suggestion quite a bit. I used to be in the chip business, so the idea of building an open-source SoC from the ground up wasn’t so crazy. However, the more I thought about it, the more I realized that this, too, was short-sighted. In the process of making chips, I’ve also edited masks for chips; chips are surprisingly malleable, even post tape-out. I’ve also spent a decade wrangling supply chains, dealing with fakes, shoddy workmanship, undisclosed part substitutions – there are so many opportunities and motivations to swap out “good” chips for “bad” ones. Even if a factory could push out a perfectly vetted computer, you’ve got couriers, customs officials, and warehouse workers who can tamper with the machine before it reaches the user. Finally, with today’s highly integrated e-commerce systems, injecting malicious hardware into the supply chain can be as easy as buying a product, tampering with it, packaging it into its original box and returning it to the seller so that it can be passed on to an unsuspecting victim.

If you want to learn more about tampering with hardware, check out my presentation at Bluehat.il 2019.

Based on these experiences, I’ve concluded that open hardware is precisely as trustworthy as closed hardware. Which is to say, I have no inherent reason to trust either at all. While open hardware has the opportunity to empower users to innovate and embody a more correct and transparent design intent than closed hardware, at the end of the day any hardware of sufficient complexity is not practical to verify, whether open or closed. Even if we published the complete mask set for a modern billion-transistor CPU, this “source code” is meaningless without a practical method to verify an equivalence between the mask set and the chip in your possession down to a near-atomic level without simultaneously destroying the CPU.

So why, then, is it that we feel we can trust open source software more than closed source software? After all, the Linux kernel is pushing over 25 million lines of code, and its list of contributors includes corporations not typically associated with words like “privacy” or “trust”.

The key, it turns out, is that software has a mechanism for the near-perfect transfer of trust, allowing users to delegate the hard task of auditing programs to experts, and having that effort be translated to the user’s own copy of the program with mathematical precision. Thanks to this, we don’t have to worry about the “supply chain” for our programs; we don’t have to trust the cloud to trust our software.

Software developers manage source code using tools such as Git (above, cloud on left), which use Merkle trees to track changes. These hash trees link code to their development history, making it difficult to surreptitiously insert malicious code after it has been reviewed. Builds are then hashed and signed (above, key in the middle-top), and projects that support reproducible builds enable any third-party auditor to download, build, and confirm (above, green check marks) that the program a user is downloading matches the intent of the developers.

There’s a lot going on in the previous paragraph, but the key take-away is that the trust transfer mechanism in software relies on a thing called a “hash”. If you already know what a hash is, you can skip the next paragraph; otherwise read on.

A hash turns an arbitrarily large file into a much shorter set of symbols: for example, the file on the left is turned into “🐱🐭🐼🐻” (cat-mouse-panda-bear). These symbols have two important properties: even the tiniest change in the original file leads to an enormous change in the shorter set of symbols; and knowledge of the shorter set of symbols tells you virtually nothing about the original file. It’s the first property that really matters for the transfer of trust: basically, a hash is a quick and reliable way to identify small changes in large sets of data. As an example, the file on the right has one digit changed — can you find it? — but the hash has dramatically changed into “🍑🐍🍕🍪” (peach-snake-pizza-cookie).
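
As a rough illustration of that first property (my own sketch, not from the original post, assuming the coreutils sha256sum tool is available), you can watch the avalanche effect from the command line:

#!/bin/sh
# Hash two inputs that differ by a single character; the resulting digests
# share essentially nothing, which is what makes a hash a quick and
# reliable detector of small changes in large data.
printf 'The quick brown fox jumps over the lazy dog' | sha256sum
printf 'The quick brown fox jumps over the lazy cog' | sha256sum
# The second property holds as well: a digest on its own reveals
# essentially nothing about the input that produced it.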

Because computer source code is also just a string of 1’s and 0’s, we can use hash functions on it, too. This allows us to quickly spot changes in code bases. When multiple developers work together, every contribution gets hashed with the previous contribution’s hashes, creating a tree of hashes. Any attempt to rewrite a contribution after it’s been committed to the tree is going to change the hash of everything from that point forward.
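
This hash-chaining is visible in everyday tools; here is a small sketch of my own (not from the original post) using plain git commands in any existing checkout:

#!/bin/sh
# Every git commit object records the hash of its file tree and the hash
# of its parent commit, so history forms a chain of hashes (a Merkle tree).
git cat-file -p HEAD            # prints "tree <hash>" and "parent <hash>" lines
git log --format='%H %P' -n 5   # each commit hash alongside its parent's hash
# Rewriting an earlier commit changes its hash, which changes the hash of
# every descendant commit, so tampering after review is detectable.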

This is why we don’t have to review every one of the 25+ million lines of source inside the Linux kernel individually – we can trust a team of experts to review the code and sleep well knowing that their knowledge and expertise can be transferred into the exact copy of the program running on our very own computers, thanks to the power of hashing.

Because hashes are easy to compute, programs can be verified right before they are run. This is known as closing the “Time-of-Check vs Time-of-Use” (TOCTOU) gap. The smaller the gap between when the program is checked versus when it is run, the less opportunity there is for malicious actors to tamper with the code.
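
As a minimal sketch of what closing that gap looks like in practice (my own example; program.sha256 and ./program are hypothetical names, with the digest file assumed to come from the developers over a trusted channel):

#!/bin/sh
# Verify the binary against its expected digest immediately before
# executing it, keeping the time-of-check-to-time-of-use gap small.
sha256sum -c program.sha256 || exit 1
exec ./program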

Now consider the analogous picture for open source in the context of hardware, shown above. If it looks complicated, that’s because it is: there are a lot of hands that touch your hardware before it gets to you!

Git can ensure that the original design files haven’t been tampered with, and openness can help ensure that a “best effort” has been made to build and test a device that is trustworthy. However, there are still numerous actors in the supply chain that can tamper with the hardware, and there is no “hardware hash function” that enables us to draw an equivalence between the intent of the developer, and the exact instance of hardware in any user’s possession. The best we can do to check a modern silicon chip is to destructively digest and delayer it for inspection in a SEM, or employ a building-sized microscope to perform ptychographic imaging.

It’s like the Heisenberg Uncertainty Principle, but for hardware: you can’t simultaneously be sure of a computer’s construction without disturbing its function. In other words, for hardware the time of check is decoupled from the time of use, creating opportunities for tampering by malicious actors.

Of course, we entirely rely upon hardware to faithfully compute the hashes and signatures necessary for the perfect transfer of trust in software. Tamper with the hardware, and all of a sudden all these clever maths are for naught: a malicious piece of hardware could forge the results of a hash computation, thus allowing bad code to appear identical to good code.

Three Principles for Building Trustable Hardware

So where does this leave us? Do we throw up our hands in despair? Is there any solution to the hardware verification problem?

I’ve pondered this problem for many years, and distilled my thoughts into three core principles:

1. Complexity is the enemy of verification. Without tools like hashes, Merkle trees and digital signatures to transfer trust between developers and users, we are left in a situation where we are reduced to relying on our own two eyes to assess the correct construction of our hardware. Using tools and apps to automate verification merely shifts the trust problem, as one can only trust the result of a verification tool if the tool itself can be verified. Thus, there is an exponential spiral in the cost and difficulty to verify a piece of hardware the further we drift from relying on our innate human senses. Ideally, the hardware is either trivially verifiable by a non-technical user, or with the technical help of a “trustable” acquaintance, e.g. someone within two degrees of separation in the social network.

2. Verify entire systems, not just components. Verifying the CPU does little good when the keyboard and display contain backdoors. Thus, our perimeter of verification must extend from the point of user interface all the way down to the silicon that carries out the secret computations. While open source secure chip efforts such as Keystone and OpenTitan are laudable and valuable elements of a trustable hardware ecosystem, they are ultimately insufficient by themselves for protecting a user’s private matters.

3. Empower end-users to verify and seal their hardware. Delegating verification and key generation to a central authority leaves users exposed to a wide range of supply chain attacks. Therefore, end users require sufficient documentation to verify that their hardware is correctly constructed. Once verified and provisioned with keys, the hardware also needs to be sealed, so that users do not need to conduct an exhaustive re-verification every time the device happens to leave their immediate person. In general, the better the seal, the longer the device may be left unattended without risk of secret material being physically extracted.

Unfortunately, the first and second principles conspire against everything we have come to expect of electronics and computers today. Since their inception, computer makers have been in an arms race to pack more features and more complexity into ever smaller packages. As a result, it is practically impossible to verify modern hardware, whether open or closed source. Instead, if trustworthiness is the top priority, one must pick a limited set of functions, and design the minimum viable verifiable product around that.

The Simplicity of Betrusted

In order to ground the conversation in something concrete, we (Sean ‘xobs’ Cross, Tom Marble, and I) have started a project called “Betrusted” that aims to translate these principles into a practically verifiable, and thus trustable, device. In line with the first principle, we simplify the device by limiting its function to secure text and voice chat, second-factor authentication, and the storage of digital currency.

This means Betrusted can’t browse the web; it has no “app store”; it won’t hail rides for you; and it can’t help you navigate a city. However, it will be able to keep your private conversations private, give you a solid second factor for authentication, and perhaps provide a safe spot to store digital currency.

In line with the second principle, we have curated a set of peripherals for Betrusted that extend the perimeter of trust to the user’s eyes and fingertips. This sets Betrusted apart from open source chip-only secure enclave projects.

Verifiable I/O

For example, the input surface for Betrusted is a physical keyboard. Physical keyboards have the benefit of being made of nothing but switches and wires, and are thus trivial to verify.

Betrusted’s keyboard is designed to be pulled out and inspected by simply holding it up to a light, and we support different languages by allowing users to change out the keyboard membrane.

The output surface for Betrusted is a black and white LCD with a high pixel density of 200ppi, approaching the performance of ePaper or print media, and is likely sufficient for most text chat, authentication, and banking applications. This display’s on-glass circuits are entirely constructed of transistors large enough to be 100% inspected using a bright light and a USB microscope. Below is an example of what one region of the display looks like through such a microscope at 50x magnification.

The meta-point about the simplicity of this display’s construction is that there are few places to hide effective back doors. This display is more trustable not just because we can observe every transistor; more importantly, we probably don’t have to, as there just aren’t enough transistors available to mount an attack.

Contrast this to more sophisticated color displays, which rely on a fleck of silicon containing millions of transistors to implement a frame buffer and command interface; this controller chip is closed-source. Even if such a chip were open, verification would require a destructive method involving delayering and a SEM. Thus, the inspectability and simplicity of the LCD used in Betrusted is fairly unique in the world of displays.

Verifiable CPU

The CPU is, of course, the most problematic piece. I’ve put some thought into methods for the non-destructive inspection of chips. While it may be possible, I estimate it would cost tens of millions of dollars and a couple years to execute a proof of concept system. Unfortunately, funding such an effort would entail chasing venture capital, which would probably lead to a solution that’s closed-source. While this may be an opportunity to get rich selling services and licensing patented technology to governments and corporations, I am concerned that it may not effectively empower everyday people.

The TL;DR is that the near-term compromise solution is to use an FPGA. We rely on logic placement randomization to mitigate the threat of fixed silicon backdoors, and we rely on bitstream introspection to facilitate trust transfer from designers to users. If you don’t care about the technical details, skip to the next section.

The FPGA we plan to use for Betrusted’s CPU is the Spartan-7 FPGA from Xilinx’s “7-Series”, because its -1L model bests the Lattice ECP5 FPGA by a factor of 2-4x in power consumption. This is the difference between an “all-day” battery life for the Betrusted device, versus a “dead by noon” scenario. The downside of this approach is that the Spartan-7 FPGA is a closed source piece of silicon that currently relies on a proprietary compiler. However, there have been some compelling developments that help mitigate the threat of malicious implants or modifications within the silicon or FPGA toolchain. These are:

• The Symbiflow project is developing a F/OSS toolchain for 7-Series FPGA development, which may eventually eliminate any dependence upon opaque vendor toolchains to compile code for the devices.
• Prjxray is documenting the bitstream format for 7-Series FPGAs. The results of this work-in-progress indicate that even if we can’t understand exactly what every bit does, we can at least detect novel features being activated. That is, the activation of a previously undisclosed back door or feature of the FPGA would not go unnoticed.
• The placement of logic within an FPGA can be trivially randomized by incorporating a random seed in the source code. This means it is not practically useful for an adversary to backdoor a few logic cells within an FPGA. A broadly effective silicon-level attack on an FPGA would lead to gross size changes in the silicon die that can be readily quantified non-destructively through X-rays. The efficacy of this mitigation is analogous to ASLR: it’s not bulletproof, but it’s cheap to execute with a significant payout in complicating potential attacks.

The ability to inspect compiled bitstreams in particular brings the CPU problem back to a software-like situation, where we can effectively transfer elements of trust from designers to the hardware level using mathematical tools. Thus, while detailed verification of an FPGA’s construction at the transistor-level is impractical (but still probably easier than a general-purpose CPU due to its regular structure), the combination of the FPGA’s non-determinism in logic and routing placement, new tools that will enable bitstream inspection, and the prospect of 100% F/OSS solutions to compile designs significantly raises the bar for trust transfer and verification of an FPGA-based CPU.


Above: a highlighted signal within an FPGA design tool, illustrating the notion that design intent can be correlated to hardware blocks within an FPGA.

One may argue that in fact, FPGAs may be the gold standard for verifiable and trustworthy hardware until a viable non-destructive method is developed for the verification of custom silicon. After all, even if the mask-level design for a chip is open sourced, how is one to divine that the chip in their possession faithfully implements every design feature?

The system described so far touches upon the first principle of simplicity, and the second principle of UI-to-silicon verification. It turns out that the 7-Series FPGA may also be able to meet the third principle, user-sealing of devices after inspection and acceptance.

Sealing Secrets within Betrusted

Transparency is great for verification, but users also need to be able to seal the hardware to protect their secrets. In an ideal work flow, users would:

1. Receive a Betrusted device

2. Confirm its correct construction through a combination of visual inspection and FPGA bitstream randomization and introspection, and

3. Provision their Betrusted device with secret keys and seal it.

Ideally, the keys are generated entirely within the Betrusted device itself, and once sealed it should be “difficult” for an adversary with direct physical possession of the device to extract or tamper with these keys.

We believe key generation and self-sealing should be achievable with a 7-series Xilinx device. This is made possible in part by leveraging the bitstream encryption features built into the FPGA hardware by Xilinx. At the time of writing, we are fairly close to understanding enough of the encryption formats and fuse burning mechanisms to provide a fully self-hosted, F/OSS solution for key generation and sealing.

As for how good the seal is, the answer is a bit technical. The TL;DR is that it should not be possible for someone to borrow a Betrusted device for a few hours and extract the keys, and any attempt to do so should leave the hardware permanently altered in obvious ways. The more nuanced answer is that the 7-series devices from Xilinx are quite popular, and have received extensive scrutiny over their lifetime by the broader security community. The best known attacks against the 256-bit CBC AES + SHA-256 HMAC used in these devices leverage hardware side channels to leak information between AES rounds. This attack requires unfettered access to the hardware and about 24 hours to collect data from 1.6 million chosen ciphertexts. While improvement is desirable, keep in mind that a decap-and-image operation to extract keys via physical inspection using a FIB takes around the same amount of time to execute. In other words, the absolute limit on how much one can protect secrets within hardware is probably driven more by physical tamper resistance measures than strictly cryptographic measures.

Furthermore, now that the principle of the side-channel attack has been disclosed, we can apply simple mitigations to frustrate this attack, such as gluing shut or removing the external configuration and debug interfaces necessary to present chosen ciphertexts to the FPGA. Users can also opt to use volatile SRAM-based encryption keys, which are immediately lost upon interruption of battery power, making attempts to remove the FPGA or modify the circuit board significantly riskier. This of course comes at the expense of accidental loss of the key should backup power be interrupted.

At the very least, with a 7-series device, a user will be well-aware that their device has been physically compromised, which is a good start; and in a limiting sense, all you can ever hope for from a tamper-protection standpoint.

You can learn more about the Betrusted project at our github page, https://betrusted.io. We think of Betrusted as more of a “hardware/software distro”, rather than as a product per se. We expect that it will be forked to fit the various specific needs and user scenarios of our diverse digital ecosystem. Whether or not we make completed Betrusted reference devices for sale will depend upon the feedback of the community; we’ve received widely varying opinions on the real demand for a device like this.

Trusting Betrusted vs Using Betrusted

I personally regard Betrusted as more of an evolution toward — rather than an end to — the quest for verifiable, trustworthy hardware. I’ve struggled for years to distill the reasons why openness is insufficient to solve trust problems in hardware into a succinct set of principles. I’m also sure these principles will continue to evolve as we develop a better and more sophisticated understanding of the use cases, their threat models, and the tools available to address them.

My personal motivation for Betrusted was to have private conversations with my non-technical friends. So, another huge hurdle in all of this will of course be user acceptance: would you ever care enough to take the time to verify your hardware? Verifying hardware takes effort, iPhones are just so convenient, Apple has a pretty compelling privacy pitch…and “anyways, good people like me have nothing to hide…right?” Perhaps our quixotic attempt to build a truly verifiable, trustworthy communications device may be received by everyday users as nothing more than a quirky curio.

Even so, I hope that by at least starting the conversation about the problem and spelling it out in concrete terms, we’re laying the framework for others to move the goal posts toward a safer, more private, and more trustworthy digital future.

The Betrusted team would like to extend a special thanks to the NLnet foundation for sponsoring our efforts.

brennen
16 days ago
Boulder, CO
acdha
23 days ago
Washington, DC

X11 screen locking: a secure and modular approach


For years I’ve been using XScreenSaver as a default, but I recently learned about xsecurelock and re-evaluated my screen-saving requirements:

  • The screen saver should turn on and lock after some configurable time.
  • The screen saver should lock on a hotkey.
  • The screen saver should lock before suspend.
  • The screen saver should authenticate via PAM.
  • The screen saver should be configurable via xset s.
  • The screen saver should make the SSH agent forget its keys on locking.
  • I can disable the screen saver, e.g. when giving a presentation.
  • The screen saver should display a pretty demo.

When I just used XScreenSaver, I had these issues:

  • Locking before suspend was not so easy; I had hacks using either xauth as root or additional helper scripts trapping signals.

  • Forgetting the SSH agent keys required a small, but additional script.
  • Rarely, XScreenSaver got stuck, so I had to kill it from a TTY to get in.
  • My xlbiff managed to pop up over XScreenSaver.

After some unsuccessful fiddling with xss-lock and xautolock, I settled on this toolset:

  • xsecurelock for screen locking and spawning XScreenSaver demos

  • xidle for spawning xsecurelock after timeout
  • xbindkeys for triggering xidle on hotkey
  • acpid for triggering xidle on lid close

Note that none of this requires systemd, DBus, or really anything else that X11 didn’t have for at least 25 years.

So, how to put it together:

I use a script run-xsecurelock, which is spawned from .xinitrc:

#!/bin/sh
# run-xsecurelock - run xsecurelock(1) with right config

if [ "$1" = lock ]; then
    ssh-add -D
    XSECURELOCK_BLANK_TIMEOUT=900 \
    XSECURELOCK_PASSWORD_PROMPT=time_hex \
    XSECURELOCK_SAVER=saver_xscreensaver \
    XSECURELOCK_SAVER_RESET_ON_AUTH_CLOSE=1 \
    XSECURELOCK_SHOW_DATETIME=1 \
        exec xsecurelock
fi

xidle -no -program "$HOME/bin/run-xsecurelock lock" -timeout 600

This runs xidle with a timeout of 10 minutes and tells it to spawn this script again with the lock argument, so it will run xsecurelock after forgetting the SSH agent keys. xsecurelock then spawns a random XScreenSaver demo. There is no support for cycling the demo, but you can trigger the authentication dialog and close it to get a different demo.
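
For completeness, the .xinitrc side is a one-liner; a minimal sketch of my own, assuming the script lives in ~/bin and cwm stands in for whatever window manager you actually use:

# ~/.xinitrc (excerpt)
"$HOME/bin/run-xsecurelock" &   # starts xidle, which locks after the timeout
exec cwm                        # replace with your window manager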

Then, we can set up the hotkey in ~/.xbindkeysrc:

"xset s activate"
  Insert

I used to use the Pause key for triggering the screen saver, but my T480 doesn’t have it anymore, so I use Insert now, which I don’t otherwise use.

Finally, acpid can trigger xidle by sending SIGUSR1 from handler.sh:

pkill -USR1 xidle

Note how simple this is: even though it runs as root, it doesn’t require getting X11 credentials or complex IPC.
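
For reference, here is a sketch of my own of how that one-liner might sit in a typical /etc/acpi/handler.sh; the exact event strings and argument layout are an assumption and vary between machines and acpid setups:

# /etc/acpi/handler.sh (excerpt): lock the screen when the lid closes
case "$1" in
    button/lid)
        case "$3" in
            close) pkill -USR1 xidle ;;   # xidle then spawns run-xsecurelock
        esac
        ;;
esac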

To disable xidle, I just run xset s off. The timeout is configurable at run time using xset s SECONDS.

This should satisfy all my requirements and is a cleaner setup than I had before.

NP: Leonard Cohen—Thanks for the Dance

brennen
17 days ago
Boulder, CO