department of hack

Fall respiratory season is around the corner


It’s fall! Well, almost, but that hasn’t stopped me from getting out my fall decorations.

Unfortunately, with the season change comes more respiratory sickness—the weather changes so people head inside, viruses mutate, and social networks change (school starts, holiday celebrations occur). 

While Covid-19 continues to circulate at high levels, we’re getting signs that other respiratory viruses are starting early. YLE will bring you regular updates throughout the season because there are things you can do to minimize disruption to your life.

Let’s dive in.

ILI: Low, but starting to creep up in the South 

During the respiratory season, epidemiologists pay close attention to “influenza-like illness” (ILI). Healthcare providers tally the number of patients who present to their office with ILI symptoms—fever, cough, and/or sore throat. This encompasses multiple viruses (Covid-19 is usually separated if it’s tested for) but is a good indication of the overall climate of respiratory health.

Nationally, ILI is still very low, but it’s starting to creep up in the South, like in Georgia. This is normal—ILI usually begins in the South and then spreads outwards nationally. We don’t know why this pattern exists, but it is a consistent reminder that “sickness season” is just around the corner. It seems to be a few weeks early.

Source: CDC and Georgia Department of Health; Annotated by YLE

Looking closer at Georgia, we see that the uptick is mainly driven by rhinovirus/enteroviruses, which typically cause “common cold” symptoms.

Source: CDC; Annotated by YLE

School-aged children are driving this uptick, which should be no surprise as schools started this month. 

Source: Georgia Department of Public Health; Annotated by YLE

Covid-19: Very high but possibly peaking

Nationally, wastewater levels for Covid-19 are still very high. All states except Michigan have “high” or “very high” levels. Michigan saw a sudden drop in wastewater levels this week, which I suspect reflects unstable data rather than “true” levels. Time will tell.

(Source: CDC; Annotated by YLE)

While the West has peaked (notably at the same levels as last winter; this was no small wave), the other regions are still rising.

Source: CDC; Annotated by YLE

Because other metrics, like emergency department visits and test positivity rates, have also peaked nationally, this is a solid sign that we will soon start riding the wave down.

Hospitalizations continue to rise. For the third week in a row, more than 1,000 Americans have died from Covid-19. 

(Source: CDC; Annotations by YLE)

Covid-19 continues to mutate (as viruses do). An Omicron subvariant—called XEC—is on the horizon, but it’s still too early to tell whether it will drive the next infection wave. Some are concerned, but Germany is the only country with enough reliable data, and there XEC is not spreading quickly. I am sure another subvariant is cooking.

What can we expect going forward? 

Epidemiologists expect a “middle of the road” respiratory season, like last year. Two things are driving this educated guess:

  1. Southern Hemisphere. We in the Northern Hemisphere have an advantage—the Southern Hemisphere’s respiratory season occurs before ours, so we can look to their winter season to predict ours. ILI in Australia was about on track with the 5-year average. The flu vaccine was also a good match for this season, which is excellent news. 

Source: Australian Centre for Disease Control; Annotated by YLE
  2. Mathematical models using previous patterns, immunity, and waning data. Last week, CDC modelers predicted hospitalizations from all respiratory viruses will remain higher than “pre-pandemic” levels. They also published two likely scenarios for the upcoming months for Covid-19, predicting a Covid-19 winter similar to or less severe than last year’s.

Emerging diseases update

There isn’t necessarily anything for the general population to do for these diseases, but we’re keeping you up to speed.

  • H5N1 continues to spread. Over the weekend, H5N1 was detected in California herds for the first time. California is the number one milk producer in the United States, so the milk supply may be impacted if the spread continues. California also has an extensive raw milk consumption market, so we may see our first severe case from raw milk. All eyes are on the upcoming seasonal flu. If a farm worker gets infected with H5N1 while sick with seasonal flu, the viruses can easily swap genes to become more adaptable to human spread. This increases the risk of an influenza pandemic.

  • Mpox continues to spread in central Africa, and few resources are available to contain it. A few countries have pledged vaccines, but not enough to get this outbreak under control. Last week, CDC sent 72 epidemiologists to aid in the response. 

  • WHO started vaccinating for polio in Gaza after the first case in 25 years was detected in an infant. This shouldn’t be a surprise, as thousands of children are missing their routine vaccinations due to the war. The vaccine being used is an oral live vaccine, which is cheap, quick, and easy to administer. However, one big long-term disadvantage is that the live vaccine virus is shed in feces, and the lack of clean water and functioning sewage means it can itself further the spread of polio. Nonetheless, 161,000 children have been vaccinated since September 1 in a herculean effort. 640,000 children—or 90% of children—must be vaccinated for the campaign to work against transmission.

Bottom line

Fall respiratory season is around the corner! Make a plan to get your vaccines. And best of luck to fellow parents out there, as we know this means sick kiddos on the horizon. 

Love, YLE 


“Your Local Epidemiologist (YLE)” is written by Dr. Katelyn Jetelina, MPH PhD—an epidemiologist, data scientist, wife, and mom of two little girls. During the day, she is a senior scientific consultant to a number of organizations, including the CDC. At night, she writes this newsletter. Her main goal is to “translate” the ever-evolving public health science so that people will be well-equipped to make evidence-based decisions. This newsletter is free, thanks to the generous support of fellow YLE community members. To support this effort, subscribe below:

Subscribe now


CEOs of Albertsons and Kroger say shoppers would see lower prices after merger


By Claire Rush and Dee-Ann Durbin, The Associated Press

PORTLAND, Ore. — The chief executive officers of Kroger and Albertsons insisted Wednesday — under questioning from the federal government — that merging would allow the two supermarket companies to lower prices and more effectively compete with retail giants like Walmart, Costco and Amazon.

Kroger CEO Rodney McMullen and Albertsons CEO Vivek Sankaran appeared in Oregon’s U.S. District Court to testify against the Federal Trade Commission’s attempt to block the proposed merger of their companies. During the hearing, the commission’s lawyers suggested that the merger would hurt competition in certain areas where the two are each other’s primary rivals.

“The day that we merge is the day that we will begin lowering prices,” McMullen said while under questioning by a lawyer representing his company.

The two companies proposed what would be the largest supermarket merger in U.S. history in October 2022, after Kroger agreed to purchase Albertsons. But the Federal Trade Commission sued to prevent the $24.6 billion deal, alleging it would eliminate competition and lead to higher food prices for already struggling customers.

Addressing another issue that has worried shoppers in communities with both Albertsons and Kroger-run stores, McMullen said Kroger was committed to not closing any branches immediately if the merger is finalized but might down the road if it decides location changes or consolidations are needed.

Outside of a King Soopers
A King Soopers grocery store in Denver in January 2022. (Olivia Sun, The Colorado Sun)

Sankaran, Albertsons’ CEO, argued that the deal would boost growth and in turn bolster stores and union jobs, because many of its and Kroger’s competitors, like Walmart, have few unionized workers. But when asked what his company would do if the merger didn’t go through, he said it may pursue “structural options” like laying off employees, closing stores and exiting certain markets, if unable to find other ways to lower costs.

“I would have to consider that,” he said. “It’s a dramatically different picture with the merger than without it.”

An FTC lawyer pointed to a written statement that Sankaran provided to the U.S. Senate in 2022 when testifying about the merger, in which he said his company was “in excellent financial condition.” Sankaran said the market and certain conditions had changed since then.

The testimonies of both CEOs were expected to be critical components of the three-week hearing, which is at its midpoint. What the two say under oath about prices, potential store closures and the impact on workers will likely be scrutinized in the years ahead if the merger goes through.

Kroger, based in Cincinnati, Ohio, operates 2,800 stores in 35 states, including brands like Ralphs, Smith’s and Harris Teeter. Albertsons, based in Boise, Idaho, operates 2,273 stores in 34 states, including brands like Safeway, Jewel Osco and Shaw’s. Together, the companies employ around 710,000 people.

FTC attorneys have argued that in the 22 states where the two companies compete now, they closely match each other on price, quality, private label products and services like store pickup. Shoppers benefit from that competition and would lose out if the merger is allowed to proceed, they said.

According to Kroger and Albertsons company documents referred to by FTC lawyers on Wednesday, the two companies are primary rivals in multiple regions, from southern California to the Portland metropolitan area. A Kroger attorney countered by saying that Walmart remains Kroger’s largest competitor in a majority of markets around the country.

McMullen said that Albertsons’ prices are 10% to 12% higher than Kroger’s and that the merged company would try to reduce the disparity as part of a strategy for keeping customers. Walmart now controls around 22% of U.S. grocery sales. Combined, Kroger and Albertsons would control around 13%.

“We know that pricing is going to continue to go down,” McMullen said.

Kim Cordova, president of UFCW Local 7, left, speaks about the Kroger and Albertsons merger during a news conference outside the federal courthouse before a hearing on the merger on Monday, Aug. 26, 2024, in Portland, Ore. Rickee Nelson, UFCW Local 7 member and grocery store worker, center, and Jessi Crowley, a pharmacist at Albertsons-owned Pavilions, right, listen. (AP Photo/Jenny Kane)

The two CEOs also spoke to the ways in which e-commerce has transformed the grocery industry, noting Amazon’s online shopping platforms and its purchase of Whole Foods.

“When Amazon enters something, they make a big change,” Sankaran said.

The FTC and labor union leaders also claim that workers’ wages and benefits would decline if Kroger and Albertsons no longer compete with each other. They have additionally expressed concern that potential store closures could create so-called food and pharmacy “deserts” for consumers.

“America needs more competition, more grocery stores, and more leverage for workers to secure better pay and staffing – not less,” the United Food and Commercial Workers International union’s Stop the Merger coalition said in a statement Wednesday.

McMullen said Wednesday that Kroger was committed to honoring existing labor contracts. The FTC’s chief trial counsel, Susan Musser, said the merger still might affect working conditions because union contracts are typically renegotiated every three years.

Under the proposed deal, Kroger and Albertsons would sell 579 stores in places where their locations overlap to C&S Wholesale Grocers, a New Hampshire-based supplier to independent supermarkets that also owns the Grand Union and Piggly Wiggly store brands.

The FTC alleges that C&S is ill-prepared to take on those stores. Laura Hall, the FTC’s senior trial counsel, has cited internal documents that indicated C&S executives were skeptical about the quality of the stores they would get and may want the option to sell or close them.

The federal courthouse is reflected in the rear window as Albertsons CEO Vivek Sankaran enters a vehicle and leaves after testifying in a federal court hearing on Wednesday, Sept. 4, 2024, in Portland, Ore. The Federal Trade Commission is seeking a preliminary injunction to block a merger of supermarket companies Albertsons and Kroger. (AP Photo/Jenny Kane)

C&S CEO Eric Winn, for his part, testified last week that he thinks his company can be successful in the venture.

The FTC is seeking an injunction to block the merger temporarily while its lawsuit against the deal goes before an administrative law judge. U.S. District Judge Adrienne Nelson was expected to hear from around 40 witnesses before deciding whether to grant the request.

If Nelson agrees to issue the injunction, the FTC plans to hold the in-house hearings starting Oct. 1. Kroger sued the FTC last month, however, alleging the agency’s internal proceedings are unconstitutional and saying it wants the merger’s merits decided in federal court.

The attorneys general of Arizona, California, the District of Columbia, Illinois, Maryland, Nevada, New Mexico, Oregon and Wyoming all joined the FTC’s lawsuit on the commission’s side. Washington and Colorado filed separate cases in state courts seeking to block the merger.

___

Durbin reported from Detroit.

brennen (Boulder, CO): fucking lol.
angelchrys: pretty sure you're not allowed to be a grocery CEO unless you can lie with a straight face
brennen: ain't that the truth. although "grocery" may be limiting the scope of that unnecessarily.

2024-08-31 ipmi


I am making steady progress towards moving the Computers Are Bad enterprise cloud to its new home, here in New Mexico. One of the steps in this process is, of course, purchasing a new server... the current Big Iron is getting rather old (probably about a decade!) and here in town I'll have the rack space for more machines anyway.

In our modern, cloud-centric industry, it is rare that I find myself comparing the specifications of a Dell PowerEdge against an HP ProLiant. Because the non-hyperscale server market has increasingly consolidated around Intel specifications and reference designs, it is even rarer that there is much of a difference between the major options.

This brings back to mind one of those ancient questions that comes up among computer novices and becomes a writing prompt for technology bloggers. What is a server? Is it just, like, a big computer? Or is it actually special?

There's a lot of industrial history wrapped up in that question, and the answer is often very context-specific. But there are some generalizations we can make about the history of the server: client-server computing originated mostly as an evolution of time-sharing computing using multiple terminals connected to a single computer. There was no expectation that terminals had a similar architecture to computers (and indeed they were usually vastly simpler machines), and that attitude carried over to client-server systems. The PC revolution instilled a WinTel monoculture in much of client-side computing by the mid-'90s, but it remained common into the '00s for servers to run entirely different operating systems and architectures.

The SPARC and Solaris combination was very common for servers, as were IBM's minicomputer architectures and their numerous operating systems. Indeed, one of the key commercial contributions of Java was the way it allowed enterprise applications to be written for a Solaris/SPARC backend while enabling code reuse for clients that ran on either stalwarts like Unix/RISC or "modern" business computing environments like Windows/x86. This model was sometimes referred to as client-server computing with "thick clients." It preserved the differentiation between "server" and "client" as classes of machines, and the universal adherence of serious business software to this model led to an association between server platforms and "enterprise computing."

Over time, things have changed, as they always do. Architectures that had been relegated to servers became increasingly niche and struggled to compete with the PC architecture on cost and performance. The general architecture of server software shifted away from vertical scaling and high-uptime systems to horizontal scaling with relaxed reliability requirements, taking away much of the advantage of enterprise-class computers. For the most part, today, a server is just a big computer. There are some distinguishing features: servers are far more likely to be SMP or NUMA, with multiple processor sockets. While the days of SAS and hardware RAID are increasingly behind us, servers continue to have more complex storage controllers and topologies than clients. And servers, almost by definition, offer some sort of out of band management.

Out-of-band management, sometimes also called lights-out management, identifies a capability that is almost unheard of in clients. A separate, smaller management computer allows for remote access to a server even when it is, say, powered off. The terms out-of-band and in-band in this context emerge from their customary uses in networking and telecom, meaning that out-of-band management is performed without the use of the standard (we might say "data plane") network connection to a machine. But in practice they have drifted in meaning, and it is probably better to think of out-of-band management as meaning that the operating system and general-purpose components are not required. This might be made clearer by comparison: a very standard example of in-band management would be SSH, a service provided by the software on a computer that allows you to interact with it. Out-of-band management, by contrast, is provided by a dedicated hardware and software stack and does not require the operating system or, traditionally, even the CPU to cooperate.

You can imagine that this is a useful capability. Today, out-of-band management is probably best exemplified by the remote console that most servers offer. It's basically an embedded IP KVM, allowing you to interact with the machine as if you were at a locally connected monitor and keyboard. A lot of OOB management products also offer "virtual media," where you can upload an ISO file to the management interface and then have it appear to the computer proper as if it were a physical device. This is extremely useful for installing operating systems.

OOB management is an interesting little corner of computer history. It's not a new idea at all; in fact, similar capabilities can be found through pretty much the entire history of business computing. If anything, it's gotten simpler and more boring over time. A few evenings ago I was watching a clabretro video about an IBM p5 he's gotten working. As is the case in most of his videos about servers, he has to give a brief explanation of the multiple layers of lower-level management systems present in the p5 and their various textmode and web interfaces.

If we constrain our discussion of "servers" to relatively modern machines, starting say in the late '80s or early '90s, there are some common features:

  • Some sort of local operator interface (this term itself being a very old one), like an LCD matrix display or grid of LED indicators, providing low-level information on hardware health.
  • A serial console with access to the early bootloader and a persistent low-level management system.
  • A higher-level management system, with a variable position in the stack depending on architecture, for remote management of the machine workload.

A lot of this stuff still hangs around today. Most servers can tell you on the front panel if a redundant component like a fan or power supply has failed, although the number of components that are redundant and can be replaced online has dwindled with time from "everything up to and including CPUs" on '90s prestige architectures to sometimes little more than fans. Serial management is still pretty common, mostly as a holdover of being a popular way to do OS installation and maintenance on headless machines [1].

But for the most part, OOB management has consolidated in the exact same way as processor architecture: onto Intel IPMI.

IPMI is confusing to some people for a couple of reasons. First, IPMI is a specification, not an implementation. Most major vendors have their own implementation of IPMI, often with features above and beyond the core IPMI spec, and they call them weird acronyms like HP iLO and Dell DRAC. These vendor-specific implementations often predate IPMI, too, so it's never quite right to say they are "just IPMI." They're independent systems with IPMI characteristics. On the other hand, more upstart manufacturers are more likely to just call it IPMI, in which case it may just be the standard offering from their firmware vendor.

Further confusing matters is a fair amount of terminological overlap. The IPMI software runs on a processor conventionally called the baseboard management controller or BMC, and the terms IPMI and BMC are sometimes used interchangeably. Lights-out management or LOM is mostly an obsolete term but sticks around because HP(E) is a fan of it and continues to call their IPMI implementation Integrated Lights-Out. The BMC should not be confused with the System Management Controller or SMC, which is one of a few terms used for a component present in client computers to handle tasks like fan speed control. These have an interrelated history and, indeed, the BMC handles those functions in most servers.

IPMI also specifies two interfaces: an out-of-band interface available over the network or a serial connection, and an in-band interface available to the operating system via a driver (and, in practice, I believe communication between the CPU and the baseboard management controller via the low-pin-count or LPC bus, which is a weird little holdover of ISA present in most modern computers). The result is that you can interact with the IPMI from a tool running in the operating system, like ipmitool on Linux. That makes it a little confusing what exactly is going on, if you don't understand that the IPMI is a completely independent system that has a local interface to the running operating system for convenience.
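The two interfaces can be made concrete with ipmitool itself. The same query can reach the BMC through the local driver or over the network; the hostname and credentials below are placeholders, and exact module names vary by distribution:

```shell
# In-band: talk to the local BMC through the OS driver stack
# (on Linux, via the ipmi_si/ipmi_devintf kernel modules).
sudo modprobe ipmi_si ipmi_devintf
sudo ipmitool mc info              # BMC firmware version, manufacturer, etc.

# Out-of-band: the same query over the network (RMCP+ on UDP 623).
# No cooperation from the host OS is required -- this works even when
# the server is powered off.
ipmitool -I lanplus -H bmc.example.com -U admin -P 'secret' mc info
ipmitool -I lanplus -H bmc.example.com -U admin -P 'secret' chassis power status
```

The `-I lanplus` flag selects the RMCP+ (IPMI 2.0) network transport; omitting `-I` lets ipmitool fall back to the local in-band device.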

What does the IPMI actually do? Well, like most things, it's mostly become a webapp. Web interfaces are just too convenient to turn down, so while a lot of IPMI products do have dedicated client software, they're porting all the features into an embedded web application. The quality of these web interfaces varies widely but is mostly not very good. That raises a question, of course, of how you get to the IPMI web interface.

Most servers on the market have a dedicated ethernet interface for the IPMI, often labelled "IPMI" or "management" or something like that. Most people would agree that the best way to use IPMI is to put the management network interface onto a dedicated physical network, for reasons of both security and reliability (IPMI should remain accessible even in case of performance or reliability problems with your main network). A dedicated physical network costs time, space, and money, though, so there are compromises. For one, your "management network" is very likely to be a VLAN on your normal network equipment. That's sort of like what AT&T calls a common-carrier switching arrangement, meaning that it behaves like an independent, private network but shares all of the actual equipment with everything else, the isolation being implemented in software. That was a weird comparison to make and I probably just need to write a whole article on CCSAs like I've been meaning to.

Even that approach requires extra cabling, though, so IPMI offers "sideband" networking. With sideband management, the BMC communicates directly with the same NIC that the operating system uses. The implementation is a little bit weird: the NIC will pretend to be two different interfaces, mixing IPMI traffic into the same packet stream as host traffic but using a different MAC address. This way, it appears to other network equipment as if there are two different network interfaces in use, as usual. I will leave judgment as to how good of an idea this is to you, but there are obvious security considerations around reducing the segregation between IPMI and application traffic.

And yes, it should be said, a lot of IPMI implementations have proven to be security nightmares. They should never be accessible to any untrusted person.

Details of network features vary between IPMI implementations, but there is a standard interface on UDP 623 that can be used for discovery and basic commands. There's often SSH and a web interface, and VNC is pretty common for remote console.

There are some neat basic functions you can perform with the IPMI, either over the network or locally using an in-band IPMI client. A useful one, if you are forgetful and keep poor records like I do, is listing the hardware modules making up the machine at an FRU or vendor part number level. You can also interact with basic hardware functions like sensors, power state, fans, etc. IPMI offers a standard watchdog timer, which can be combined with software running on the operating system to ensure that the server will be reset if the application gets into an unhealthy state. You should set a long enough timeout to allow the system to boot and for you to connect and disable the watchdog timer, ask me how I know.
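A few of those functions, sketched as ipmitool invocations (session flags omitted; these work the same in-band or with `-I lanplus` against a remote BMC):

```shell
ipmitool fru print             # list FRUs: board/product part and serial numbers
ipmitool sensor list           # temperatures, fan RPMs, voltages, thresholds
ipmitool chassis power status  # query power state
ipmitool chassis power cycle   # hard power-cycle, no OS involvement

# Watchdog: arm it, then have a healthy OS-level service keep resetting
# it before it expires; if the OS wedges, the BMC resets the machine.
ipmitool mc watchdog get       # show current timer state and timeout
ipmitool mc watchdog reset     # pat the watchdog
ipmitool mc watchdog off       # disarm it (do this early if you lock yourself out)
```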

One of the reasons I thought to write about IPMI is its strange relationship to the world of everyday client computers. IPMI is very common in enterprise servers but very rare elsewhere, much to the consternation of people like me that don't have the space or noise tolerance for a 1U pizzabox in their homes. If you are trying to stick to compact or low-power computers, you'll pretty much have to go without.

But then, there's kind of a weird exception. What about Intel ME and AMD ST? These are essentially OOB management controllers that are present in virtually all Intel and AMD processors. This is kind of an odd story. Intel ME, the Management Engine, is an enabling component of Intel Active Management Technology (Intel AMT). AMT was pretty much an attempt at popularizing OOB management for client machines, and offers most of the same capabilities as IPMI. It has been considerably less successful. Most of that is probably due to pricing: Intel has limited almost all AMT features to use with their very costly enterprise management platforms. Perhaps there is some industry in which these sell well, but I am apparently not in it. There are open-source AMT clients, but the next problem you will run into is finding a machine where AMT is actually usable.

The fact that Intel AMT has sideband management capability, and that therefore the Intel ME component on which AMT runs has sideband management capability, was the topic of quite some consternation in the security community. Here is a mitigating factor: sideband management is only possible if the processor, motherboard chipset, and NIC are all AMT-capable. Options for all three devices are limited to Intel products with the vPro badge. The unpopularity of Intel NICs in consumer devices alone means that sideband access is rarely possible. vPro is also limited to relatively high-end processors and chipsets. The bad news is that you will have a hard time using AMT in your homelab, although some people certainly do. The upside is that the widely-reported "fact" that Intel ME is accessible via sideband networking on consumer devices is typically untrue, and for reasons beyond Intel software licensing.

That leaves an odd question around Intel ME itself, though, which is certainly OOB management-like but doesn't really have any OOB management features without AMT. So why do nearly all processors have it? Well, this is somewhat speculative, but the impression I get is that Intel ME exists mostly as a convenient way to host and manage trusted execution components that are used for things like Secure Boot and DRM. These features all run on the same processor as ME and share some common technology stack. The "management" portion of Intel ME is thus largely vestigial, and it's part of the secure computing infrastructure.

This is not to make excuses for Intel ME, which is entirely unauditable by third parties and has harbored significant security vulnerabilities in the past. But, remember, we all use one processor architecture from one of two vendors, so Intel doesn't have a whole lot of motivation to do better. Lest you respond that ARM is the way, remember that modern ARM SOCs used in consumer devices have pretty much identical capabilities.

It is what it is.

[1] The definition of "headless" is sticky and we have to not get stuck on it too much. People tend to say "headless" to mean no monitor and keyboard attached, but keep in mind that slide-out rack consoles and IP KVMs have been common for a long time and so in non-hyperscale environments truly headless machines are rarer than you would think. Part of this is because using a serial console is a monumental pain in the ass, so your typical computer operator will do a lot to avoid dealing with it. Before LCD displays, this meant a CRT and keyboard on an Anthro cart with wheels, but now that we are an enlightened society, you can cram a whole monitor and keyboard into 1U and get a KVM switching fabric that can cover the whole rack. Or swap cables. Mostly swap cables.


The Pull Request


A brief and biased history.

Oh yeah, there’s pull requests now

– GitHub blog, Sat, 23 Feb 2008

When GitHub launched, it had no code review.

Three years after launch, in 2011, GitHub user rtomayko became the first person to make a real code comment, which read, in full: “+1”.

Before that, GitHub lacked any way to comment on code directly.

Instead, pull requests were a combination of two simple features:

  1. Cross repository compare view – a feature they’d debuted in 2010—git diff in a web page.
  2. A comments section – a feature most blogs had in the 90s. There was no way to thread comments, and the comments were on a different page than the diff.
GitHub pull requests circa 2010. This is from the official documentation on GitHub.

Earlier still, when the pull request debuted, GitHub claimed only that pull requests were “a way to poke someone about code”—a way to direct message maintainers, but one that lacked any web view of the code whatsoever.

For developers, it worked like this:

  • Make a fork.
  • Click “pull request”.
  • Write a message in a text form.
  • Send the message to someone1 with a link to your fork.
  • Wait for them to reply.

In effect, pull requests were a limited way to send emails to other GitHub users.

Ten years after this humble beginning—seven years after the first code comment—when Microsoft acquired GitHub for $7.5 billion, this cobbled-together system known as “GitHub flow” had become the default way to collaborate on code via Git.

And I hate it.

Pull requests were never designed. They emerged. But not from careful consideration of the needs of developers or maintainers.

Pull requests work like they do because they were easy to build.

In 2008, GitHub’s developers could have opted to use git format-patch instead of teaching the world to juggle branches. Or they might have chosen to generate pull requests using the git request-pull command that’s existed in Git since 2005 and is still used by the Linux kernel maintainers today2.
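For contrast, here is a sketch of both patch-based flows (the repository URL and branch names are placeholders): format-patch serializes commits as email-formatted files that git am applies on the other end, while request-pull merely generates the summary message a maintainer pulls from.

```shell
# Contributor side: turn the commits on "feature" that aren't on "main"
# into one mailbox-format .patch file per commit.
git format-patch main..feature -o outgoing/

# Maintainer side: apply those patches, preserving author and message.
git am outgoing/*.patch

# The request-pull alternative: no patch files at all, just a generated
# summary (diffstat plus where to fetch from) to paste into an email.
git request-pull main https://example.com/repo.git feature
```

Either flow keeps the unit of exchange a commit series rather than a branch on a hosted fork.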

Instead, they shrugged into GitHub flow, and that flow taught the world to use Git.

And commit histories have sucked ever since.

For some reason, github has attracted people who have zero taste, don’t care about commit logs, and can’t be bothered.

– Linus Torvalds, 2012


  1. “Someone” was a person chosen by you from a checklist of the people who had also forked this repository at some point.↩︎

  2. Though to make small, contained changes you’d use git format-patch and git am.↩︎

brennen (Boulder, CO): "And I hate it."

https://sarahcandersen.com/post/760703958615457792

Groxx (Silicon Valley, CA):
*pans to the left*
*neighbor has Christmas stuff out already*

Saturday Morning Breakfast Cereal - Life

Hovertext:
Reading astrobiology ruined the universe for me.

jlvanderzwan: Starfishes and other radially symmetric life-forms: "Am I a joke to you?"