- Health effects associated with consumption of unprocessed red meat: a Burden of Proof study - 2022


Characterizing the potential health effects of exposure to risk factors such as red meat consumption is essential to inform health policy and practice. Previous meta-analyses evaluating the effects of red meat intake have generated mixed findings and do not formally assess evidence strength. Here, we conducted a systematic review and implemented a meta-regression—relaxing conventional log-linearity assumptions and incorporating between-study heterogeneity—to evaluate the relationships between unprocessed red meat consumption and six potential health outcomes. We found weak evidence of association between unprocessed red meat consumption and colorectal cancer, breast cancer, type 2 diabetes and ischemic heart disease. Moreover, we found no evidence of an association between unprocessed red meat and ischemic stroke or hemorrhagic stroke. We also found that while risk for the six outcomes in our analysis combined was minimized at 0 g unprocessed red meat intake per day, the 95% uncerta
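To make the methodological point concrete: a conventional dose-response meta-analysis assumes that log relative risk is linear in intake, while the Burden of Proof approach fits a flexible curve and accounts for between-study heterogeneity. The toy contrast below illustrates only the "relaxing log-linearity" idea, using a quadratic fit on invented numbers; it is not the authors' actual meta-regression model, and the data are made up for illustration.

```python
import numpy as np

# Invented study-level data: intake (g/day) and reported log relative risk.
dose   = np.array([0.0, 25.0, 50.0, 75.0, 100.0, 150.0])
log_rr = np.array([0.00, 0.05, 0.08, 0.09, 0.10, 0.10])

# Conventional assumption: log(RR) is linear in dose.
linear_fit = np.polyfit(dose, log_rr, deg=1)

# Relaxed assumption: allow curvature (here a simple quadratic; the study
# uses splines), so the risk curve can flatten out at higher intakes.
flexible_fit = np.polyfit(dose, log_rr, deg=2)

grid = np.linspace(0, 150, 7)
print(np.round(np.polyval(linear_fit, grid), 3))    # straight line in log(RR)
print(np.round(np.polyval(flexible_fit, grid), 3))  # curve that levels off
```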

Evidence of a social evaluation penalty for using AI
Significance
As AI tools become increasingly prevalent in workplaces, understanding the social dynamics of AI adoption is crucial. Through four experiments with over 4,400 participants, we reveal a social penalty for AI use: Individuals who use AI tools face negative judgments about their competence and motivation from others. These judgments manifest as both anticipated and actual social penalties, creating a paradox where productivity-enhancing AI tools can simultaneously improve performance and damage one’s professional reputation. Our findings identify a potential barrier to AI adoption and highlight how social perceptions may reduce the acceptance of helpful technologies in the workplace.
Abstract
Despite the rapid proliferation of AI tools, we know little about how people who use them are perceived by others. Drawing on theories of attribution and impression management, we propose that people believe they will be evaluated negatively by others for using AI tools

A new study out in the journal Nature Communications Earth & Environment finds that wildfires fueled by climate change are linked to as many as thousands of annual deaths and billions of dollars in economic burden.
Wildfires driven by climate change contribute to as many as thousands of annual deaths and billions of dollars in economic costs from wildfire smoke in the United States, according to a new study.
The paper, published Friday in the journal Nature Communications Earth & Environment, found that from 2006 to 2020, climate change contributed to about 15,000 deaths from exposure to small particulate matter from wildfires and cost about $160 billion. The annual range of deaths was 130 to 5,100, the study showed, with the highest in states such as Oregon and California.
“We’re seeing a lot more of these wildfire smoke events,” said Nicholas Nassikas, a study author and a physician and professor of medicine at Harvard Medical School. So he and a multidisciplinary team of researchers wanted to know: “What does it really mean in a changing environment for things like mortality, which is kind of the worst possible health outcome?”
Ar


**Absolute Zero: Reinforced Self-play Reasoning with Zero Data** Abstract: > Reinforcement learning with verifiable rewards (RLVR) has shown...

This is an automated archive made by the Lemmit Bot.
The original was posted on /r/singularity by /u/FeathersOfTheArrow on 2025-05-07 07:03:43+00:00.
Absolute Zero: Reinforced Self-play Reasoning with Zero Data
Abstract:
Reinforcement learning with verifiable rewards (RLVR) has shown promise in enhancing the reasoning capabilities of large language models by learning directly from outcome-based rewards. Recent RLVR works that operate under the zero setting avoid supervision in labeling the reasoning process, but still depend on manually curated collections of questions and answers for training. The scarcity of high-quality, human-produced examples raises concerns about the long-term scalability of relying on human supervision, a challenge already evident in the domain of language model pretraining
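The key ingredient RLVR relies on is an outcome-based, automatically verifiable reward rather than a learned reward model or human labels of the reasoning trace. Below is a minimal sketch of what such a reward might look like for math-style answers; the `extract_final_answer` helper and the exact reward values are illustrative assumptions, not the implementation used in the paper.

```python
import re

def extract_final_answer(completion: str) -> str | None:
    """Illustrative helper: pull the last number out of a model completion."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return matches[-1] if matches else None

def verifiable_reward(completion: str, ground_truth: str) -> float:
    """Outcome-based reward: 1.0 if the extracted answer matches the
    reference answer exactly, 0.0 otherwise. No learned reward model,
    and no supervision of the intermediate reasoning steps."""
    answer = extract_final_answer(completion)
    return 1.0 if answer is not None and answer == ground_truth else 0.0

# Only the final outcome is scored, not the reasoning trace.
print(verifiable_reward("Step 1: 6*7 = 42. So the answer is 42.", "42"))  # 1.0
print(verifiable_reward("I think the answer is 41.", "42"))               # 0.0
```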

Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model? [paper and related material with empirical data supporting the hypothesis that current reinforcemen...

From [the project page for the work](https://limit-of-rlvr.github.io/): >Recent breakthroughs in reasoning-focused large language models (LLMs)...
This is an automated archive made by the Lemmit Bot.
The original was posted on /r/singularity by /u/Wiskkey on 2025-04-22 16:03:39+00:00.
Original Title: Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model? [paper and related material with empirical data supporting the hypothesis that current reinforcement learning techniques elicit abilities already present in base language models]
From the project page for the work:
Recent breakthroughs in reasoning-focused large language models (LLMs) like OpenAI-o1, DeepSeek-R1, and Kimi-1.5 have largely relied on Reinforcement Learning with Verifiable Rewards (RLVR), which replaces human annotations with automated rewards (e.g., verified math solutions or passing code tests
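For the "passing code tests" flavor of automated reward mentioned above, the verifier can be pictured as simply executing the model's code together with the unit tests and rewarding a clean exit. This is a hedged sketch under that assumption; the function name and the subprocess-based harness are hypothetical, not the actual pipelines used in the referenced work.

```python
import subprocess, sys, tempfile, textwrap

def code_test_reward(generated_code: str, test_code: str, timeout: float = 5.0) -> float:
    """Return 1.0 if the generated code passes the accompanying tests,
    0.0 otherwise. The tests themselves act as the verifier, so no
    human grading of the solution is needed."""
    program = textwrap.dedent(generated_code) + "\n\n" + textwrap.dedent(test_code)
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout)
        return 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0

# Example: the asserts in `tests` verify the model's `add` function.
solution = "def add(a, b):\n    return a + b"
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"
print(code_test_reward(solution, tests))  # 1.0
```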

Our brains are filling with more and more microplastics, study shows | The Washington Post
A paper published Monday in Nature Medicine found that tiny fragments of plastic are passing through the blood-brain barrier into human brains, and the amount of microplastics in the brain appears to be increasing over time. The concentration of microplastics in analyzed brains rose by about 50 percent from 2016 to 2024.

Clearance of p16Ink4a-positive senescent cells delays ageing-associated disorders

Advanced age is the main risk factor for most chronic diseases and functional deficits in humans, but the fundamental mechanisms that drive ageing remain largely unknown, impeding the development of interventions that might delay or prevent ...

This paper covers a potential causal link between cellular senescence and various aging phenotypes. Removing senescent cells expressing the cyclin-dependent kinase inhibitor and senescence biomarker p16Ink4a delayed the onset of age-related phenotypes in mouse skeletal muscle, adipose, and eye tissues.

Implausibility of radical life extension in humans in the 21st century
Interesting analysis of long-term trends in human life expectancy

Thinking about the Thymus
Full disclosure, this is outside my area of expertise (whatever that means…).
I want to talk about the thymus and its importance in aging. I recently came across a fascinating paper that builds on a model of human lymphopoiesis across development and aging, and I wanted to share it with you all: (https://pubmed.ncbi.nlm.nih.gov/38908962).
The thymus plays a key role in the immune system, especially in the production and maturation of T-cells, which are crucial for immune responses. One of the things that really piqued my interest is how the paper discusses developmental transitions in the thymus and how these changes potentially affect the immune system throughout life. It’s especially interesting how thymic involution with age may impact immune health, and how this could tie into the overall aging process.
To me, it's wild that the thymus pretty much "dies" before we’re even out of our teens... Seriously, look at Figure 5. This idea has kept me up at night for about a decade now. A

Post Format
Hi, all!
Excited to see this community grow. Still figuring out Lemmy, so thanks for bearing with us. I want posts to have some form of identifier for ease in finding things, so let's start every post with a tag before the title. Here are a few tag rules for now.
TAGS
- [QUESTION] - for posts directly asking input from the community.
- [PAPER] - for posts about specific papers
- [REVIEW] - for posts reviewing scientific literature, make sure to cite all papers discussed
- [DISCUSSION] - for a more general post around any given topic or multiple topics
FORMAT

If none of the tags seem to fit, a descriptive title will suffice.
Thanks!

Safety and efficacy of rapamycin on healthspan metrics after one year: PEARL Trial Results

Rapamycin has been shown to have longevity-enhancing effects in murine models, but clinical data on its gerotherapeutic effects in humans remains limited. We performed a 48-week double-blinded, randomized, and placebo-controlled decentralized study (Participatory Evaluation of Aging with Rapamycin f...

This is a preprint, which means that the article has not been peer-reviewed yet. This is all part of the normal process; researchers will often present their findings before their work is published.
Here are the deets!
The AgelessRx-sponsored Participatory Evaluation of Aging with Rapamycin for Longevity (PEARL) trial was a 48-week randomized, double-blind, placebo-controlled trial investigating the safety and potential efficacy of different intermittent rapamycin doses for mitigating signs of aging.
More info!

NoCha: a benchmark for long-context language models that measures claim verification about recent fiction books. Paper: 'One Thousand and One Pairs: A "novel" challenge for long-context language m...
Posted in r/LocalLLaMA by u/Wiskkey • 36 points and 7 comments
This is an automated archive made by the Lemmit Bot.
The original was posted on /r/singularity by /u/Wiskkey on 2024-06-28 15:35:35+00:00.
Original Title: NoCha: a benchmark for long-context language models that measures claim verification about recent fiction books. Paper: 'One Thousand and One Pairs: A "novel" challenge for long-context language models'.
From A Novel Challenge for long-context language models:
NoCha measures how well long-context language models can verify claims written about fictional books. Check out our paper and GitHub repo for more details.
About the benchmark: NoCha contains 1001 narrative minimal pairs written about recently-published novels, where one claim is true and the other is false. Given the book text and a c
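One natural way to score such minimal pairs is to credit a model only when it labels both claims in a pair correctly, so that a verifier that always answers "true" gets zero rather than 50%. The sketch below illustrates that pairwise scoring idea; the data layout and the strict both-correct criterion are assumptions for illustration and may not match the paper's exact protocol.

```python
from dataclasses import dataclass

@dataclass
class MinimalPair:
    """One narrative minimal pair: a true claim and a false claim
    about the same book."""
    true_claim: str
    false_claim: str

def pair_accuracy(pairs: list[MinimalPair], verify) -> float:
    """`verify(claim) -> bool` is the model under test. A pair counts
    only if the true claim is labeled True AND the false claim False,
    so a constant 'always true' verifier scores 0.0 rather than 0.5."""
    correct = sum(
        1 for p in pairs
        if verify(p.true_claim) is True and verify(p.false_claim) is False
    )
    return correct / len(pairs) if pairs else 0.0

# A degenerate verifier that answers True for everything scores 0.0.
pairs = [MinimalPair("Ada hides the letter.", "Ada burns the letter.")]
print(pair_accuracy(pairs, lambda claim: True))  # 0.0
```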

Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in SOTA Large Language Models

Large Language Models (LLMs) are often described as being instances of foundation models - that is, models that transfer strongly across various tasks and conditions in few-shot or zero-shot manner, while exhibiting scaling laws that predict function improvement when increasing the pre-training scal

"Alice has N brothers and she also has M sisters. How many sisters does Alice’s brother have?"
The problem has a light quiz style and is arguably no challenge for most adult humans, and probably even for some children.
The scientists posed varying versions of this simple problem to various state-of-the-art LLMs that claim strong reasoning capabilities (GPT-3.5/4/4o, Claude 3 Opus, Gemini, Llama 2/3, Mistral and Mixtral, including the very recent DBRX and Command R+).
They observed a strong collapse of reasoning and an inability to answer the simple question as formulated above across most of the tested models, despite claimed strong reasoning capabilities. Notable exceptions are Claude 3 Opus and GPT-4, which occasionally manage to provide correct responses.
This breakdown can be considered dramatic not only because it happens on such a seemingly simple problem, but also because models tend to express strong overconfidence in reporting their wrong solutions as correct, while often providing
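For reference, the intended answer is M + 1: Alice's brother has her M sisters plus Alice herself. Here is a minimal sketch of how instances of the problem and their ground truth could be generated for varying N and M; the prompt wording is an assumption, not necessarily the authors' exact template.

```python
def aiw_instance(n_brothers: int, m_sisters: int) -> tuple[str, int]:
    """Build one 'Alice in Wonderland' style prompt and its ground truth.
    Each of Alice's brothers has M + 1 sisters: Alice's M sisters plus
    Alice herself."""
    prompt = (
        f"Alice has {n_brothers} brothers and she also has {m_sisters} sisters. "
        f"How many sisters does Alice's brother have?"
    )
    return prompt, m_sisters + 1

prompt, answer = aiw_instance(n_brothers=3, m_sisters=6)
print(prompt)   # ...3 brothers... 6 sisters...
print(answer)   # 7
```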

TypeLoom: Gradual Typing with the LSP

Weaving LSP-based Optional Typing into Dynamic Languages - frroossst/TypeLoom

Abstract (emphasis added):
This paper introduces TypeLoom, a tool utilising a novel approach to add gradual, optional typing to legacy code bases of dynamically typed languages. TypeLoom leverages the Language Server Protocol (LSP) to provide in-editor type information through inlay hints and to collect additional type information through code actions. This approach differs from existing ones in that it requires no syntactical changes to add type hints (as in Python or TypeScript) and does not require specially structured comments to provide type information (as in @ts-docs and Ruby). TypeLoom utilises a graph-based data structure to provide type inference and type checking. This graph-based approach is particularly effective for gradual typing as it allows flexible representation of type relationships and dependencies.
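The graph-based idea can be pictured as a constraint graph whose nodes are program entities and whose edges propagate known or declared types to untyped nodes. The toy sketch below shows only that propagation idea; the class and method names are hypothetical, and this is not TypeLoom's actual data structure or inference algorithm.

```python
from collections import defaultdict, deque

class TypeGraph:
    """Toy constraint graph: an edge a -> b means 'b gets its type from a'.
    Declared types are propagated breadth-first to untyped nodes."""
    def __init__(self):
        self.edges = defaultdict(list)   # node -> nodes it flows into
        self.types = {}                  # node -> declared/inferred type

    def declare(self, node: str, type_name: str):
        self.types[node] = type_name

    def flows_to(self, src: str, dst: str):
        self.edges[src].append(dst)

    def infer(self):
        queue = deque(self.types)
        while queue:
            node = queue.popleft()
            for succ in self.edges[node]:
                if succ not in self.types:       # only fill in unknowns
                    self.types[succ] = self.types[node]
                    queue.append(succ)
        return self.types

# x is annotated; y = x and z = y pick up the type through the graph.
g = TypeGraph()
g.declare("x", "int")
g.flows_to("x", "y")
g.flows_to("y", "z")
print(g.infer())  # {'x': 'int', 'y': 'int', 'z': 'int'}
```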

Researchers Showcase Decentralized AI-Powered Torrent Search Engine
Researchers from Delft University of Technology plan to amplify their BitTorrent client "Tribler" with decentralized AI-powered search.
Personally, I found this article highly interesting; it's very much worth a read. I've included the full article with images and links in the spoiler below 🌻
::: spoiler Tap me to read Full article here
Researchers from Delft University of Technology plan to amplify their BitTorrent client "Tribler" with decentralized AI-powered search. A new demo shows that generative AI models make it possible to search for content in novel ways, without restriction. The ultimate goal of the research project is to shift the Internet's power balance from governments and large corporations back to consumers.

The ability to search for and share content with complete strangers was nothing short of a revolution.
In the years that followed, media consumption swiftly moved online. This usually involved content shared without permission