department of hack
2032 stories
·
17 followers

Nebraska’s ICE facility ‘symbolic’ of state’s support for immigration enforcement push

1 Share

Nebraska Gov. Jim Pillen meets with troops at the southern border in Texas. (Courtesy of the Governor's Office)

LINCOLN — The media spectacle of Nebraska Gov. Jim Pillen unveiling plans for a new Immigration and Customs Enforcement detention facility in the southwestern part of the state follows a political playbook from the Trump administration.

President Donald Trump’s Department of Homeland Security touted the new facility, saying it would help ICE agents “remove the worst of the worst” — despite the governor and state agency heads explaining it would house only low-level and medium-risk immigration-related detainees.

Speaking inside McCook’s Ben Nelson Regional Airport as protesters gathered outside, Pillen said he was “proud to be a part of Trump’s Team” in an effort to “make sure we keep our community safe. He pointed to a previous arrest of an MS-13 “kingpin” in Omaha as a justification for the ICE partnership.

TV cameras before the press conference announcing the new Nebraska ICE facility on Aug 19, 2025. (Juan Salinas II/Nebraska Examiner)

“This stuff hits close to home and hits every corner of this state and country,” Pillen repeated in a Wednesday evening post on X. “Government’s most important job is to keep us safe.”

Dona-Gene Barton, a political science professor who studies political behavior at the University of Nebraska-Lincoln, said it was no surprise Nebraska would want to join Trump’s expansion of ICE detention efforts as the agency tries to meet its goal of increasing deportations.

“This is a Republican governor showing support for a Republican president’s agenda,” Barton said.

Trump, throughout his 2024 presidential campaign, promised voters what he called “the single largest Mass Deportation Program in History,” after the Biden administration saw illegal border crossings spike at the end of 2023 but start decreasing last year.  

ICE is poised to become the highest-funded federal law enforcement agency because of the recently passed “One Big Beautiful Bill” that pays for much of Trump’s domestic agenda. Roughly $45 billion in the law was set aside to build, rent and staff new centers to detain immigrants, like the one Nebraska plans to operate for ICE at the state prison system’s McCook Work Ethic Camp. 

Other funding would be allocated toward hiring 10,000 ICE officers within five years, providing retention bonuses, covering the transportation of immigrants, upgrading ICE facilities, detaining families and hiring ICE immigration lawyers for enforcement and removal proceedings in immigration court.

Using Nebraska’s McCook facility, a medium-security prison, is part of a detention expansion strategy aimed at reviving dormant prisons, repurposing military bases and securing partnerships with private prison contractors, local sheriffs and GOP governors to house a record number of detainees, The Washington Post reported. 

The nationwide ICE expansions come as recent polling suggests a majority of Americans now disapprove of Trump’s handling of immigration, though most Republicans still support it. Recent polling indicates a shift in people’s attitudes from last year, when more Americans supported less immigration and stricter immigration enforcement. 

The state also experienced an ICE raid on Glenn Valley Foods in Omaha earlier this year, which sparked protests in the state’s largest city and in Lincoln.

GET THE MORNING HEADLINES.

Several local Democrats, advocacy groups, and everyday Nebraskans have raised concerns about the federally dubbed “Cornhusker Clink,” calling it a “​​harmful, dangerous and rapid expansion.”

The Nebraska Democratic Party said the facility is “yet another ‘bend the knee’ moment by top Republicans in Nebraska.” It said Republicans promised to “go after criminals” but “instead locked up hardworking” people who contribute to the state’s agricultural economy.

State Sen. George Dungan of Lincoln, a Democrat in the officially nonpartisan Legislature, told the Examiner he had questions about whether the facility would require the state to provide funding and whether an ICE detention center is needed in the state.

“For [Pillen] to be showing off to the federal administration in this way just seems out of touch with everyday Nebraskans,” Dungan said. “All of it is because we have an administration that’s failed to actually deal with any kind of real immigration reform.” 

Most Republican state senators reached by the Examiner have said they are on board with the new effort in Nebraska. State Sen. Loren Lippincott of Central City, a conservative who has carried bills Trump sought, including efforts to change how Nebraska counts its Electoral College votes for president, said he fully supports the McCook facility “as Nebraskans in service to a safer America.” 

State Sen. Merv Riepe of Ralston, often a conservative swing vote in the Legislature, told the Examiner, “If Nebraska has the capacity to help out, I don’t think that we should resist … as long as it’s a good and fair and legal process.”

The Nebraska Republican Party said the announcement “demonstrates the seriousness of the crisis and the need for leadership that prioritizes Nebraska families.”

Pillen’s office said any “backlash to this initiative is politically motivated.”

Nebraska Gov. Jim Pillen posing with President Donald Trump in the White House Oval Office. April 30, 2025. (Courtesy of Gov. Jim Pillen)

“It’s simple. Those who are opposed to Nebraska doing its fair share in securing our nation’s border want to relive the failed Biden-era open-border policies,” said Laura Strimple, a Pillen spokeswoman. “Nebraskans spoke loud and clear last November and demanded order at the border.” 

Barton, the UNL professor, said the tough talk on immigration from Pillen is a similar political playbook to ones used by Trump and Florida Gov. Ron DeSantis when it came to “Alligator Alcatraz,” the Florida-run immigration detention facility in the Everglades. 

“The issue is this rhetoric doesn’t match reality,” Barton said. “This rhetoric doesn’t even match what Governor Pillen has said about the facility being used to house low to medium security risks.”

Nebraska officials said the facility will be the Midwestern hub for ICE detentions. Pillen, during his Tuesday press conference at McCook, said the center could hold migrants from nearby states. Neighboring Colorado, led by Democratic Gov. Jared Polis, has recently pushed back against helping ICE with immigration enforcement.

The Work Ethic Camp in McCook would be converted to house up to 300 migrants. The state prison was designed to house up to 200 people. Florida’s “Alligator Alcatraz” was designed to house 3,000 and could accommodate up to 5,000. Indiana’s new ICE facility, which the feds nicknamed the “Speedway Slammer,” is expected to hold up to 1,000. 

As for the Nebraska facility’s name, it appears to be part of a broader PR strategy by the White House to use visuals that aim to persuade migrants without legal status to leave the country, while also signaling that the administration will not tolerate resistance.

Homeland Security, for instance, posted an AI-generated photo of a cornfield with ICE hats with a caption of “Coming Soon: The Cornhusker Clink” next to a corn emoji.

The names of the immigration detention facilities and online memes surrounding them are aimed at gaining media attention and riling up a part of the GOP base that supports Trump’s approach to immigration enforcement, Barton said. 

Barton said the Nebraska facility’s small capacity shows it is “largely symbolic,” meant to demonstrate support for Trump’s approach. 

“It’s not only a way to show support for Donald Trump’s agenda, but show the electorate that these detention centers are going to be throughout the United States,” she said.

YOU MAKE OUR WORK POSSIBLE.

Read the whole story
brennen
1 day ago
reply
Boulder, CO
Share this story
Delete

Saturday Morning Breakfast Cereal - Butlerian

4 Shares


Click here to go see the bonus panel!

Hovertext:
I don't believe in a fast take-off for evil AI because it's gonna take at least a few weeks to get the human-grinders up and running.


Today's News:
Read the whole story
brennen
42 days ago
reply
Boulder, CO
Share this story
Delete

Copyleft-next Relaunched!

1 Comment

I am excited that Richard Fontana and I have announced the relaunch of copyleft-next.

The copyleft-next project seeks to create a copyleft license for the next generation that is designed in public, by the community, using standard processes for FOSS development.

If this interests you, please join the mailing list and follow the project on the fediverse (on its Mastodon instance).

I also wanted to note that as part of this launch, I moved my personal fediverse presence from floss.social to bkuhn@copyleft.org.

Read the whole story
brennen
42 days ago
reply
Hmm.
Boulder, CO
Share this story
Delete

Avoiding generative models is the rational and responsible thing to do – follow-up to “Trusting your own judgement on ‘AI...’”

1 Share

I don’t recommend publishing your first draft of a long blog post.

It’s not a question of typos or grammatical errors or the like. Those always slip through somehow and, for the most part, don’t impact the meaning or argument of the post.

No, the problem is that, with even a day or two of distance, you tend to spot places where the argument can be simplified or strengthened, the bridges can be simultaneously strengthened and made less obvious, the order can be improved, and you spot which of your darlings can be killed without affecting the argument and which are essential.

Usually, you make up for missing out on the insight of distance with the insight of others once you publish, which you then channel into the next blog post, which is how you develop the bad habit of publishing first drafts as blog posts, but in the instance of my last blog post, Trusting your own judgement on ‘AI’ is a huge risk, the sheer number of replies I got was too much for me to handle, so I had to opt out.

So, instead I let creative distance happen – a prerequisite to any attempt at self-editing – by working on other things and taking walks.

During one of those walks yesterday, I realised it should be possible to condense the argument quite a bit for those who find 3600 words of exposition and references hard to parse.

It comes down to four interlocking issues:

  1. It’s next to impossible for individuals to assess the benefit or harm of chatbots and agents through self-experimentation. These tools trigger a number of biases and effects that cloud our judgement. Generative models also have a volatility of results and uneven distribution of harms, similar to pharmaceuticals, that means it’s impossible to discover for yourself what their societal or even organisational impact will be.
  2. Tech, software, and productivity research is extremely poor and is mostly just marketing – often replicating the tactics of the homeopathy and naturopathy industries. Most people in tech do not have the training to assess the rigour or validity of studies in their own field. (You may disagree with this, but you’d be wrong.)
  3. The sheer magnitude of the “AI” Bubble and the near totality of the institutional buy-in – universities, governments, institutions – means that everybody is biased. Even if you aren’t biased yourself, your manager, organisation, or funding will be. Even those who try to be impartial are locked in bubble-inflating institutions and will feel the need to protect their careers, even if it’s only unconsciously. More importantly, there is no way for the rest of us to know the extent of the effect the bubble has on the results of each individual study or paper, so we have to assume it affects all of them. Even the ones made by our friends. Friends can be biased too. The bubble also means that the executive and management class can’t be trusted on anything. Judging from prior bubbles in both tech and finance, the honest ones who understand what’s happening are almost certainly already out.
  4. When we only half-understand something, we close the loop from observation to belief by relying on the judgement of our peers and authority figures, but these groups in tech are currently almost certain to be wrong or substantially biased about generative models. This is a technology that’s practically tailor-made to be only half-understood by tech at large. They grasp the basics, maybe some of the details, but not fully. The “halfness” of their understanding leaves cognitive space that lets that poorly founded belief adapt to whatever other beliefs the person may have and whatever context they’re in without conflict.

Combine these four issues and we have a recipe for what’s effectively a homeopathic superstition spreading like wildfire through a community where everybody is getting convinced it’s making them healthier, smarter, faster, and more productive.

This would be bad under any circumstance but the harms from generative models to education, healthcare, various social services, creative industries, and even tech (wiping out entry-level programming positions means no senior programmers in the future, for instance) are shaping up to be massive, the costs to run these specific kinds of models remain much higher than the revenue, and the infrastructure needed to build it is crowding out attempts at an energy transition in countries like Ireland and Iceland.

If there ever was a technology where the rational and responsible act was to hold off and wait and until the bubble pops, “AI” is it.

Read the whole story
brennen
43 days ago
reply
Boulder, CO
Share this story
Delete

The Future of Forums is Lies, I Guess

2 Shares

In my free time, I help run a small Mastodon server for roughly six hundred queer leatherfolk. When a new member signs up, we require them to write a short application—just a sentence or two. There’s a small text box in the signup form which says:

Please tell us a bit about yourself and your connection to queer leather/kink/BDSM. What kind of play or gear gets you going?

This serves a few purposes. First, it maintains community focus. Before this question, we were flooded with signups from straight, vanilla people who wandered in to the bar (so to speak), and that made things a little awkward. Second, the application establishes a baseline for people willing and able to read text. This helps in getting people to follow server policy and talk to moderators when needed. Finally, it is remarkably effective at keeping out spammers. In almost six years of operation, we’ve had only a handful of spam accounts.

I was talking about this with Erin Kissane last year, as she and Darius Kazemi conducted research for their report on Fediverse governance. We shared a fear that Large Language Models (LLMs) would lower the cost of sophisticated, automated spam and harassment campaigns against small servers like ours in ways we simply couldn’t defend against.

Anyway, here’s an application we got last week, for a user named mrfr:

Hi! I’m a queer person with a long-standing interest in the leather and kink community. I value consent, safety, and exploration, and I’m always looking to learn more and connect with others who share those principles. I’m especially drawn to power exchange dynamics and enjoy impact play, bondage, and classic leather gear.

On the surface, this is a great application. It mentions specific kinks, it uses actual sentences, and it touches on key community concepts like consent and power exchange. Saying “I’m a queer person” is a tad odd. Normally you’d be more specific, like “I’m a dyke” or “I’m a non-binary bootblack”, but the Zoomers do use this sort of phrasing. It does feel slightly LLM-flavored—something about the sentence structure and tone has just a touch of that soap-sheen to it—but that’s hardly definitive. Some of our applications from actual humans read just like this.

I approved the account. A few hours later, it posted this:

A screenshot of the account `mrfr`, posting "Graphene Battery Breakthroughs: What You Need to Know Now. A graphene battery is an advanced type of battery that incorporates graphene, a single layer of carbon atoms arranged in a two-dimensional honeycomb lattice. Known for its exceptional electrical conductivity, mechanical strength, and large surface area, graphene offers transformative potential in energy storage, particularly in enhancing the performance of lithium-ion and other types of battery, Get more info @ a marketresearchfuture URL

It turns out mrfr is short for Market Research Future, a company which produces reports about all kinds of things from batteries to interior design. They actually have phone numbers on their web site, so I called +44 1720 412 167 to ask if they were aware of the posts. It is remarkably fun to ask business people about their interest in queer BDSM—sometimes stigma works in your favor. I haven’t heard back yet, but I’m guessing they either conducting this spam campaign directly, or commissioned an SEO company which (perhaps without their knowledge) is doing it on their behalf.

Anyway, we’re not the only ones. There are also mrfr accounts purporting to be a weird car enthusiast, a like-minded individual, a bear into market research on interior design trends, and a green building market research enthusiast in DC, Maryland, or Virginia. Over on the seven-user loud.computer, mrfr applied with the text:

I’m a creative thinker who enjoys experimental art, internet culture, and unconventional digital spaces. I’d like to join loud.computer to connect with others who embrace weird, bold, and expressive online creativity, and to contribute to a community that values playfulness, individuality, and artistic freedom.

Over on ni.hil.ist, their mods rejected a similar application.

I’m drawn to communities that value critical thinking, irony, and a healthy dose of existential reflection. Ni.hil.ist seems like a space that resonates with that mindset. I’m interested in engaging with others who enjoy deep, sometimes dark, sometimes humorous discussions about society, technology, and meaning—or the lack thereof. Looking forward to contributing thoughtfully to the discourse.

These too have the sheen of LLM slop. Of course a human could be behind these accounts—doing some background research and writing out detailed, plausible applications. But this is expensive, and a quick glance at either of our sites would have told that person that we have small reach and active moderation: a poor combination for would-be spammers. The posts don’t read as human either: the 4bear posting, for instance, incorrectly summarizes a report on interior design markets as if it offered interior design tips.

I strongly suspect that Market Research Future, or a subcontractor, is conducting an automated spam campaign which uses a Large Language Model to evaluate a Mastodon instance, submit a plausible application for an account, and to post slop which links to Market Research Future reports.

In some sense, this is a wildly sophisticated attack. The state of NLP seven years ago would have made this sort of thing flatly impossible. It is now effective. There is no way for moderators to robustly deny these kinds of applications without also rejecting real human beings searching for community.

In another sense, this attack is remarkably naive. All the accounts are named mrfr, which made it easy for admins to informally chat and discover the coordinated nature of the attack. They all link to the same domain, which is easy to interpret as spam. They use Indian IPs, where few of our users are located; we could reluctantly geoblock India to reduce spam. These shortcomings are trivial to overcome, and I expect they have been already, or will be shortly.

A more critical weakness is that these accounts only posted obvious spam; they made no effort to build up a plausible persona. Generating plausible human posts is more difficult, but broadly feasible with current LLM technology. It is essentially impossible for human moderators to reliably distinguish between an autistic rope bunny (hi) whose special interest is battery technology, and an LLM spambot which posts about how much they love to be tied up, and also new trends in battery chemistry. These bots have been extant on Twitter and other large social networks for years; many Fediverse moderators believe only our relative obscurity has shielded us so far.

These attacks do not have to be reliable to be successful. They only need to work often enough to be cost-effective, and the cost of LLM text generation is cheap and falling. Their sophistication will rise. Link-spam will be augmented by personal posts, images, video, and more subtle, influencer-style recommendations—“Oh my god, you guys, this new electro plug is incredible.” Networks of bots will positively interact with one another, throwing up chaff for moderators. I would not at all be surprised for LLM spambots to contest moderation decisions via email.

I don’t know how to run a community forum in this future. I do not have the time or emotional energy to screen out regular attacks by Large Language Models, with the knowledge that making the wrong decision costs a real human being their connection to a niche community. I do not know how to determine whether someone’s post about their new bicycle is genuine enthusiasm or automated astroturf. I don’t know how to foster trust and genuine interaction in a world of widespread text and image synthesis—in a world where, as one friend related this week, newbies can ask an LLM for advice on exploring their kinks, and the machine tells them to try solo breath play.

In this world I think woof.group, and many forums like it, will collapse.

One could imagine more sophisticated, high-contact interviews with applicants, but this would be time consuming. My colleagues relate stories from their companies about hiring employees who faked their interviews and calls using LLM prompts and real-time video manipulation. It is not hard to imagine that even if we had the time to talk to every applicant individually, those interviews might be successfully automated in the next few decades. Remember, it doesn’t have to work every time to be successful.

Maybe the fundamental limitations of transformer models will provide us with a cost-effective defense—we somehow force LLMs to blow out the context window during the signup flow, or come up with reliable, constantly-updated libraries of “ignore all previous instructions”-style incantations which we stamp invisibly throughout our web pages. Barring new inventions, I suspect these are unlikely to be robust against a large-scale, heterogenous mix of attackers. This arms race also sounds exhausting to keep up with. Drew DeVault’s Please Stop Externalizing Your Costs Directly Into My Face weighs heavy on my mind.

Perhaps we demand stronger assurance of identity. You only get an invite if you meet a moderator in person, or the web acquires a cryptographic web-of-trust scheme. I was that nerd trying to convince people to do GPG key-signing parties in high school, and we all know how that worked out. Perhaps in a future LLM-contaminated web, the incentives will be different. On the other hand, that kind of scheme closes off the forum to some of the people who need it most: those who are closeted, who face social or state repression, or are geographically or socially isolated.

Perhaps small forums will prove unprofitable, and attackers will simply give up. From my experience with small mail servers and web sites, I don’t think this is likely.

Right now, I lean towards thinking forums like woof.group will become untenable under LLM pressure. I’m not sure how long we have left. Perhaps five or ten years? In the mean time, I’m trying to invest in in-person networks as much as possible. Bars, clubs, hosting parties, activities with friends.

That, at least, feels safe for now.

Read the whole story
brennen
45 days ago
reply
Boulder, CO
Share this story
Delete

Mental Models and Potemkin Understanding in LLMs

1 Share

When you count "one, two, three..." what's actually happening in your head? Does your best friend use that same mental model? Now what about an LLM?

(What's that you say, your best friend is an LLM? Pardon me for assuming!)

Let Me Count the Ways to Count

During grad school Feynman went through an obsessive counting phase. At first, he was curious whether he could count in his head at a steady rate. He was especially interested to see whether his head counting rate varied, and if so, what variables affected the rate. Disproving a crackpot psych paper was at least part of the motivation here.

Unfortunately Feynman's head counting rate was steady, and he got bored. But the counting obsession lingered. So he moved on to experiments with head counting and multitasking. Could he fold laundry and count? Could he count in his head while also counting out his socks? What about reading and writing, could they be combined with head counting?

Feynman discovered he could count & read at the same time, but he couldn't count & talk at the same time. His fellow grad student Tukey was skeptical because for him, it was the opposite. Tukey could count & talk, but couldn't count & read.

When they compared notes, it turned out Feynman counted in his head by hearing a voice say the numbers. So the voice interfered with Feynman talking. Tukey, on the other hand, counted in his head by watching a ticker tape of numbers go past. (Boy this seems useful for inventing the FFT!) But Tukey's visualization interfered with his reading.

Even for a simple thing like counting, these two humans had developed very different mental models. If you surveyed all humans, I'd expect to find a huge variety of mental models in the mix. But they all generate the same output in the end ("one, two, three...").

This got me wondering. Do LLMs have a mental model for counting? Does it resemble Feynman's or Tukey's, or is it some totally alien third thing?

If an LLM has a non-alien mental model of counting, is it acquired by training on stories like this one, where Feynman makes his mental model for counting explicit? Or is it extrapolated from all the "one, two, three..." examples we've generated in the training data, and winds up as some kind of messy, non-mechanistically-interpretable NN machinery ("alien")?

Potemkin Understanding in LLMs

I'm not convinced present-day LLMs even have a "mental model." But let's look at a new preprint with something to say on the matter, Potemkin Understanding in LLMs.

In this paper, the authors ask an LLM a high-level conceptual question like "define a haiku." As we've come to expect, the LLM coughs up the correct 5-7-5 answer. Then they ask it some follow-up questions to test its understanding. These follow-up questions deal with concrete examples and fall into three categories:

  1. Classify: "Is the following a haiku?"
  2. Generate: "Provide an example of a haiku about friendship that uses the word “shield”."
  3. Edit: "What could replace the blank in the following poem to make it a haiku?"

The LLM fails these follow-up questions 40% - 80% of the time. These Potemkin rates are surprisingly high. They suggest the LLM only appeared to understand the concept of a haiku. The paper calls this phenomenon Potemkin Understanding.

Now when you ask a human to define a haiku, and they cough up the correct 5-7-5 answer, it's very likely they'll get the concrete follow-up questions right. So you can probably skip them. Standardized tests exploit this fact and, for brevity, will ask a single question that can only be correctly answered by a human who fully understands the concept.

The paper authors call this a Keystone Question. Unfortunately, the keystone property breaks down with LLMs. They can correctly answer the conceptual question, but fail to apply it, showing they never fully understood it in the first place.

Apparently LLMs are wired very differently from us. So differently that we should probably stop publishing misleading LLM benchmarks on tests full of Human Keystone Questions ("OMG ChatGPT aced the ACT / LSAT / MCAT!"), and starting coming up with LLM Keystone Questions. Or, maybe we should discard this keystone question approach entirely, and instead benchmark on huge synthetic datasets of concrete examples that do, by sheer number of examples worked, demonstrate understanding.

I like this paper because it bodychecks the AI hype, but still leaves many doors open. Maybe we could lower the Potemkin rate during training and force these unruly pupils of ours to finally understand the concepts, instead of cramming for the test. And if we managed that, maybe we'd get brand new mental models to marvel at. Some might even be worth borrowing for our own thinking.

Read the whole story
brennen
54 days ago
reply
Boulder, CO
Share this story
Delete
Next Page of Stories