Large Language Models (LLM)

The latest news, technical breakthroughs, and industry applications of large language models such as GPT, Claude, Llama, and Gemini.

Alibaba's Qwen Large Model Changes Leadership: 32-Year-Old Lin Junyang Announces His Departure
LLM

Alibaba's large-model unit Qwen (阿里千问) has undergone a key personnel change: its 32-year-old lead, Lin Junyang (林俊旸), has formally announced his departure. The change of guard is read as a signal of strategic or organizational adjustment in Alibaba's AI efforts, likely aimed at optimizing resource allocation and accelerating the path from AI breakthroughs to products amid intensifying market competition.

Alibaba's Qwen Large Model Changes Leadership: 32-Year-Old Lin Junyang Announces His Departure
LLM

Alibaba Cloud recently announced a new round of organizational restructuring focused on three core businesses: cloud computing infrastructure, large AI models, and enterprise services, while strengthening its integrated "cloud + AI" strategy. The move mirrors the global cloud market's shift from scale expansion to disciplined, profit-oriented operations and is intended to make the organization more agile against Amazon AWS, Micro...

Alibaba's Qwen Large Model Changes Leadership: 32-Year-Old Lin Junyang Announces His Departure
LLM

Microsoft announced a major reorganization, creating a new Microsoft AI division led by Google DeepMind co-founder Mustafa Suleyman. The division will bring together core AI product lines including Copilot, Bing search, and the Edge browser, concentrating resources to accelerate innovation and mount a more unified response to...

Alibaba's Qwen Large Model Changes Leadership: 32-Year-Old Lin Junyang Announces His Departure
LLM

Several leading global technology companies have recently announced large-scale reorganizations. The common thread is consolidating scattered AI R&D teams into standalone divisions and divesting non-core businesses to raise operating efficiency. The changes come amid growing macroeconomic uncertainty, with more than 60% of tech CEOs ranking "improving organizational efficiency" as their top priority. Experts argue that organizational...

Qwen Model Lead Lin Junyang Tenders Resignation; Alibaba Executives Hold Emergency Q&A
LLM

After Lin Junyang, head of Alibaba's Tongyi Qianwen (Qwen) large model, tendered his resignation, senior leaders of Alibaba Group and Alibaba Cloud quickly convened a meeting. Chairman and CEO Wu Yongming, Chief People Officer Jiang Fang, and Alibaba Cloud CTO Zhou Jingren responded directly, stressing that the Qwen business is expanding rather than contracting, that the move has nothing to do with infighting, and that the group plans to invest more resources in its development. ...

Qwen Model Lead Lin Junyang Tenders Resignation; Alibaba Executives Hold Emergency Q&A
LLM

Alibaba Group leadership convened an emergency meeting in response to the resignation of Qwen lead Lin Junyang, stating plainly that the adjustment is an expansion and strengthening of the team, not a business contraction or internal power struggle. Management pledged additional resources for the Qwen model, aiming to steady the team and speed the move from research to industrial application. The swift messaging...

Qwen Model Lead Lin Junyang Tenders Resignation; Alibaba Executives Hold Emergency Q&A
LLM

Alibaba Group leadership held an emergency meeting over the resignation of Lin Junyang, head of the Tongyi Qianwen (Qwen) large model, stating clearly that the change is part of a team expansion unrelated to internal politics and pledging more compute, talent, and R&D resources for the Qwen models. The meeting was meant to steady morale and convey Alibaba's continued commitment to large AI models as a strat...

Qwen Model Lead Lin Junyang Tenders Resignation; Alibaba Executives Hold Emergency Q&A
LLM

After Lin Junyang, head of Alibaba's Qwen large model, tendered his resignation, Alibaba leadership convened an emergency meeting in response, denying any contraction of the project or internal power struggle and describing the move as a "team expansion," with more resources to be directed toward large models and AI. A succession plan is still under discussion, and who will take over the team has not been finalized.

Qwen Model Lead Lin Junyang Tenders Resignation; Alibaba Executives Hold Emergency Q&A
LLM

After Lin Junyang, head of Alibaba's Tongyi Qianwen large model, tendered his resignation, senior leaders of Alibaba Group and Alibaba Cloud quickly convened an internal meeting to address the core-team changes and future strategy. With Chairman and CEO Wu Yongming, Chief People Officer Jiang Fang, and Alibaba Cloud CTO Zhou Jingren among the participants, the meeting characterized the adjustment as a team expansion rather than a contraction and stressed that the company will continue...

Qwen Model Lead Lin Junyang Tenders Resignation; Alibaba Executives Hold Emergency Q&A
LLM

Responding to speculation over the departure of Lin Junyang, head of Alibaba Cloud's Tongyi Qianwen (Qwen) model team, Alibaba Group leadership convened an emergency meeting and made clear that the change is a "team expansion," not a strategic retreat. Management, including Chairman and CEO Wu Yongming, Chief People Officer Jiang Fang, and Alibaba Cloud CTO Zhou Jingren, agreed that the company will invest more...

Amazon Explores Offering Its AI Chatbot Advertising Technology to Other Apps
LLM

Amazon is working on a new advertising-technology service designed to help third-party apps and websites embed and serve ads inside their AI chatbot interfaces. The effort is a key part of extending the company's advertising ecosystem into generative-AI scenarios as user behavior shifts from keyword search to natural-language conversation. Amazon's advertising business in 2023 generated...

Qwen Model Lead Lin Junyang Tenders Resignation; Alibaba Executives Hold Emergency Q&A | 智能涌现 Exclusive
LLM

Lin Junyang, technical lead of Alibaba's Tongyi Qianwen large model, abruptly announced his departure in the early hours of March 4, sending a shockwave through the team. Alibaba leadership convened an emergency meeting, framing the reorganization as a "team expansion" meant to add talent and resources and stressing that the Qwen foundation model is the group's most important current priority. The shake-up involves several core team members, and industry...

Qwen Model Lead Lin Junyang Tenders Resignation; Alibaba Executives Hold Emergency Q&A | 智能涌现 Exclusive
LLM

Lin Junyang, technical lead of Alibaba's Tongyi Qianwen large model, abruptly tendered his resignation on March 4, with several core members following and the team left shaken. Alibaba Group CEO Wu Yongming urgently convened an all-hands meeting, stressing that the Qwen foundation model is the group's most important battleground and one it must win, and characterizing the adjustment as an expansion rather than a contraction. The episode highlights how top...

Qwen Model Lead Lin Junyang Tenders Resignation; Alibaba Executives Hold Emergency Q&A | 智能涌现 Exclusive
LLM

On March 4, 2025, Lin Junyang, technical lead of Alibaba's Tongyi Qianwen large model, suddenly announced his departure, triggering an emergency meeting of Alibaba's top executives. CEO Wu Yongming and other leaders framed the reorganization as a "team expansion" and reiterated that Qwen foundation-model R&D is the group's highest priority. The loss of core talent could delay Qwen model development by half a year...

Qwen Model Lead Lin Junyang Tenders Resignation; Alibaba Executives Hold Emergency Q&A | 智能涌现 Exclusive
LLM

On March 4, 2025, Lin Junyang, technical lead of Alibaba's Qwen large model, suddenly announced his departure, shaking the team and prompting several core members to follow. Alibaba leadership held an emergency meeting, framing the reorganization as an expansion meant to add talent and resources and affirming that the Qwen foundation model is the group's most important priority. The episode exposes how top AI talent...

CORE: Concept-Oriented Reinforcement for Bridging the Definition-Application Gap in Mathematical Reasoning
LLM

CORE (Concept-Oriented REinforcement) is a novel reinforcement learning framework designed to address the conceptual rea...

CORE: Concept-Oriented Reinforcement for Bridging the Definition-Application Gap in Mathematical Reasoning
LLM

CORE (Concept-Oriented Reinforcement) is a novel reinforcement learning framework that addresses the conceptual reasonin...

CORE: Concept-Oriented Reinforcement for Bridging the Definition-Application Gap in Mathematical Reasoning
LLM

CORE (Concept-Oriented REinforcement) is a novel AI training framework designed to teach large language models genuine c...

CORE: Concept-Oriented Reinforcement for Bridging the Definition-Application Gap in Mathematical Reasoning
LLM

CORE (Concept-Oriented Reinforcement) is a novel AI training framework that addresses the conceptual reasoning gap in la...

CORE: Concept-Oriented Reinforcement for Bridging the Definition-Application Gap in Mathematical Reasoning
LLM

CORE (Concept-Oriented REinforcement) is a novel AI training framework that addresses the conceptual reasoning gap in la...

Best-of-$\infty$ -- Asymptotic Performance of Test-Time Compute
LLM

Researchers introduced Best-of-$\infty$ (Bo∞) theory, analyzing the asymptotic performance limits of majority voting in large langua...

Best-of-$\infty$ -- Asymptotic Performance of Test-Time Compute
LLM

New research introduces Best-of-Infinity (Bo∞), an adaptive generation framework that efficiently approximates infinite ...

Best-of-$\infty$ -- Asymptotic Performance of Test-Time Compute
LLM

The Best-of-Infinity (BoI) framework represents the theoretical performance ceiling for large language models using majo...

Best-of-$\infty$ -- Asymptotic Performance of Test-Time Compute
LLM

Researchers have introduced a theoretical framework called bo∞ (best-of-infinity) that analyzes the asymptotic performan...

Best-of-$\infty$ -- Asymptotic Performance of Test-Time Compute
LLM

Researchers have developed a Best-of-N (BoN) framework for LLM output selection that analyzes infinite majority voting l...
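The Bo∞ summaries above are truncated, but the premise they share, majority voting over ever more sampled answers, can be illustrated with a toy Monte Carlo sketch. This is my illustration under a simplifying assumption (each sampled answer is independently correct with fixed probability), not the paper's experiments; `majority_vote_accuracy` is a made-up helper name.

```python
import random

def majority_vote_accuracy(p_correct, n_samples, n_trials=2000, seed=0):
    """Estimate the accuracy of majority voting over n_samples answers,
    each independently correct with probability p_correct."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        n_right = sum(rng.random() < p_correct for _ in range(n_samples))
        if n_right * 2 > n_samples:  # strict majority votes correctly
            wins += 1
    return wins / n_trials

# With per-sample accuracy above 1/2, widening the vote pushes accuracy
# toward its asymptotic ceiling (a Condorcet-style effect).
acc_1 = majority_vote_accuracy(0.6, 1)
acc_51 = majority_vote_accuracy(0.6, 51)
```

Under this independence assumption the single-sample accuracy stays near 0.6 while the 51-sample vote climbs above 0.9; the interesting question the paper studies is what the exact limit is and how fast it is approached.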

ScaleDoc: Scaling LLM-based Predicates over Large Document Collections
LLM

ScaleDoc is a novel system for efficient semantic analysis of large document collections using Large Language Models (LL...

ScaleDoc: Scaling LLM-based Predicates over Large Document Collections
LLM

ScaleDoc is a novel system for efficiently scaling LLM-based semantic filtering over large document collections. The arc...

ScaleDoc: Scaling LLM-based Predicates over Large Document Collections
LLM

ScaleDoc is a novel system that enables efficient semantic filtering of large document collections using Large Language ...
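The ScaleDoc summaries are cut off above, but a common pattern for scaling LLM predicates over large corpora is a proxy-then-LLM cascade: a cheap scorer handles confident cases and only uncertain documents are escalated to the expensive model. The sketch below is an illustrative pattern only, not ScaleDoc's actual architecture; `proxy_score` and `llm_judge` are hypothetical stand-ins.

```python
def cascade_filter(docs, proxy_score, llm_judge, lo=0.2, hi=0.8):
    """Evaluate a semantic predicate over docs with a cheap proxy,
    calling the expensive LLM only in the uncertain band (lo, hi)."""
    accepted, llm_calls = [], 0
    for doc in docs:
        s = proxy_score(doc)
        if s >= hi:                 # proxy is confident: accept
            accepted.append(doc)
        elif s > lo:                # uncertain: escalate to the LLM
            llm_calls += 1
            if llm_judge(doc):
                accepted.append(doc)
        # s <= lo: proxy is confident the predicate fails, no LLM call
    return accepted, llm_calls

# Toy run: the proxy scores keyword density; the "LLM" is a stub.
docs = ["ai ai ai", "ai filler", "filler filler"]
proxy = lambda d: d.split().count("ai") / len(d.split())
judge = lambda d: "ai" in d
kept, calls = cascade_filter(docs, proxy, judge)
```

Here only the borderline document triggers an LLM call; the confident accept and reject are handled by the proxy alone, which is where the cost savings come from.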

Proper losses regret at least 1/2-order
LLM

A new study (arXiv:2407.10417v2) proves that for strictly proper loss functions, the convergence rate of probability est...

Proper losses regret at least 1/2-order
LLM

A new study (arXiv:2407.10417v2) proves that strictly proper loss functions in machine learning guarantee non-vacuous bo...

Proper losses regret at least 1/2-order
LLM

Research establishes that strictly proper loss functions are necessary for deriving meaningful bounds on surrogate regre...

Proper losses regret at least 1/2-order
LLM

Research from arXiv:2407.10417v2 establishes that strictly proper loss functions guarantee non-vacuous bounds on surroga...

FlexGuard: Continuous Risk Scoring for Strictness-Adaptive LLM Content Moderation
LLM

FlexGuard is a novel LLM-based content moderation system that outputs continuous risk scores instead of binary classific...

FlexGuard: Continuous Risk Scoring for Strictness-Adaptive LLM Content Moderation
LLM

FlexGuard is a novel AI moderation system that generates continuous risk scores instead of binary safe/harmful classific...
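The practical payoff of a continuous risk score over a binary safe/harmful label is that each deployment can pick its own cutoff. The snippet below is a minimal sketch of that idea, not FlexGuard's API; the threshold values and deployment names are invented for illustration.

```python
# Hypothetical per-deployment strictness thresholds (lower = stricter).
THRESHOLDS = {"children_app": 0.3, "general_chat": 0.6, "red_team_tool": 0.9}

def moderate(risk_score, deployment):
    """Turn one continuous risk score in [0, 1] into a per-deployment
    decision; a binary classifier would force a single global cutoff."""
    if not 0.0 <= risk_score <= 1.0:
        raise ValueError("risk score must lie in [0, 1]")
    return "block" if risk_score >= THRESHOLDS[deployment] else "allow"

# The same borderline content (score 0.5) is blocked in a children's
# app but allowed in a general-purpose chat product.
d1 = moderate(0.5, "children_app")
d2 = moderate(0.5, "general_chat")
```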

Tell Me What To Learn: Generalizing Neural Memory to be Controllable in Natural Language
LLM

Researchers have developed a novel neural memory system that allows AI models to perform instruction-based memory update...

Curriculum Learning for Efficient Chain-of-Thought Distillation via Structure-Aware Masking and GRPO
LLM

A novel three-stage curriculum learning framework successfully distills Chain-of-Thought reasoning from large language m...

Curriculum Learning for Efficient Chain-of-Thought Distillation via Structure-Aware Masking and GRPO
LLM

A novel three-stage curriculum learning framework successfully distills complex Chain-of-Thought reasoning from large la...

Curriculum Learning for Efficient Chain-of-Thought Distillation via Structure-Aware Masking and GRPO
LLM

A novel three-stage curriculum learning framework enables efficient distillation of Chain-of-Thought reasoning from larg...

Curriculum Learning for Efficient Chain-of-Thought Distillation via Structure-Aware Masking and GRPO
LLM

A novel three-stage curriculum learning framework efficiently distills Chain-of-Thought (CoT) reasoning from large teach...
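The summaries above mention a three-stage curriculum with structure-aware masking but are truncated before the details. As a heavily hedged illustration of what staged masking over a reasoning chain could look like (my invention, not the paper's scheme; `structure_aware_mask` is a hypothetical helper):

```python
import random

def structure_aware_mask(steps, stage, seed=0):
    """Illustrative curriculum masking: stage 1 shows the full chain,
    stage 2 hides a random half of the steps, stage 3 keeps only the
    final answer, so the student must reconstruct ever more reasoning."""
    rng = random.Random(seed)
    if stage == 1:
        keep = set(range(len(steps)))
    elif stage == 2:
        keep = set(rng.sample(range(len(steps)), k=len(steps) // 2))
    else:
        keep = {len(steps) - 1}  # final answer only
    return [s if i in keep else "[MASK]" for i, s in enumerate(steps)]

chain = ["step1: factor", "step2: cancel", "step3: answer = 4"]
stage3 = structure_aware_mask(chain, stage=3)
```

The point of any such schedule is that the supervision signal shrinks as training progresses, which is the general shape a CoT-distillation curriculum takes.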

On the Relationship Between Representation Geometry and Generalization in Deep Neural Networks
LLM

A groundbreaking study reveals that the effective dimension of a neural network's internal representations is a powerful...

On the Relationship Between Representation Geometry and Generalization in Deep Neural Networks
LLM

A study analyzing 52 pretrained ImageNet models found that effective dimension, a geometric measurement of neural networ...

On the Relationship Between Representation Geometry and Generalization in Deep Neural Networks
LLM

A groundbreaking study reveals that effective dimension, a geometric property of neural network representations, strongl...
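One common way to quantify "effective dimension" of a representation is the participation ratio of the covariance eigenvalue spectrum; the paper's exact estimator may differ, but this form conveys the geometric idea the summaries refer to:

```python
def effective_dimension(eigenvalues):
    """Participation-ratio effective dimension of a representation:
    (sum of eigenvalues)^2 / (sum of squared eigenvalues).
    It counts how many directions carry substantial variance."""
    s1 = sum(eigenvalues)
    s2 = sum(v * v for v in eigenvalues)
    return s1 * s1 / s2

# Variance spread evenly over 4 directions -> effective dimension 4;
# all variance in one direction -> effective dimension 1.
flat = effective_dimension([1.0, 1.0, 1.0, 1.0])
peaked = effective_dimension([4.0, 0.0, 0.0, 0.0])
```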

QiMeng-CRUX: Narrowing the Gap Between Natural Language and Verilog via Core Refined Understanding eXpression for Circuit Design
LLM

CRUX-V is a novel AI framework that significantly improves hardware description language (HDL) code generation from natu...

QiMeng-CRUX: Narrowing the Gap Between Natural Language and Verilog via Core Refined Understanding eXpression for Circuit Design
LLM

CRUX-V is a novel AI framework that bridges the gap between natural language descriptions and Verilog hardware code thro...

SURFACEBENCH: A Geometry-Aware Benchmark for Symbolic Surface Discovery
LLM

SURFACEBENCH is the first benchmark designed to evaluate artificial intelligence's ability to discover the symbolic equa...

SURFACEBENCH: A Geometry-Aware Benchmark for Symbolic Surface Discovery
LLM

SURFACEBENCH is the first comprehensive benchmark designed to evaluate artificial intelligence in discovering symbolic e...

Policy Transfer for Continuous-Time Reinforcement Learning: A (Rough) Differential Equation Approach
LLM

A groundbreaking study provides the first theoretical proof that policy transfer techniques can be successfully applied ...

Post-hoc Stochastic Concept Bottleneck Models
LLM

Post-hoc Stochastic Concept Bottleneck Models (PSCBMs) are a novel interpretable AI method that enhances existing Concep...

The Choice of Divergence: A Neglected Key to Mitigating Diversity Collapse in Reinforcement Learning with Verifiable Reward
LLM

Researchers have developed DPH-RL (Diversity-Preserving Hybrid Reinforcement Learning), a novel framework that addresses...

Tailored Behavior-Change Messaging for Physical Activity: Integrating Contextual Bandits and Large Language Models
LLM

A novel hybrid AI system combining contextual multi-armed bandit algorithms with large language models has demonstrated ...

Know When to Abstain: Optimal Selective Classification with Likelihood Ratios
LLM

Researchers have developed a new selective classification framework by applying the Neyman-Pearson lemma, treating abste...
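Treating abstention through the Neyman-Pearson lens suggests a likelihood-ratio test: commit to a prediction only when the evidence for the top class sufficiently dominates the alternative. The sketch below is an illustrative reading of that idea, not the paper's framework; the threshold and `selective_predict` helper are invented.

```python
def selective_predict(likelihoods, threshold=3.0):
    """Likelihood-ratio abstention rule (illustrative): return the top
    class only if its likelihood ratio over the runner-up clears the
    threshold; otherwise abstain (return None)."""
    ranked = sorted(likelihoods.items(), key=lambda kv: kv[1], reverse=True)
    (top, p1), (_, p2) = ranked[0], ranked[1]
    ratio = p1 / max(p2, 1e-12)  # guard against a zero runner-up
    return top if ratio >= threshold else None

confident = selective_predict({"cat": 0.9, "dog": 0.1})    # ratio 9
unsure = selective_predict({"cat": 0.55, "dog": 0.45})     # ratio ~1.2
```

Raising the threshold trades coverage for accuracy, which is exactly the curve selective-classification methods are evaluated on.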

Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models
LLM

Landscape of Thoughts (LoT) is a novel visualization framework that converts textual reasoning steps from large language...

Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models
LLM

Landscape of Thoughts (LoT) is a novel visualization tool that transforms large language model reasoning steps into two-...

Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models
LLM

Landscape of Thoughts (LoT) is a novel visualization tool that creates landscape maps of large language model reasoning ...

Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models
LLM

Landscape of Thoughts (LoT) is a novel visualization tool that transforms the textual reasoning chains of large language...

Absolute abstraction: a renormalisation group approach
LLM

A new theoretical framework challenges conventional AI wisdom by demonstrating that abstraction depends fundamentally on...

Absolute abstraction: a renormalisation group approach
LLM

A new theoretical framework challenges conventional wisdom by demonstrating that abstraction in neural networks depends ...

Absolute abstraction: a renormalisation group approach
LLM

A new theoretical and experimental study challenges conventional AI wisdom by demonstrating that absolute abstraction in...

Absolute abstraction: a renormalisation group approach
LLM

A new study challenges conventional AI wisdom by demonstrating that true abstraction requires both neural network depth ...

A Reinforcement Learning Approach in Multi-Phase Second-Price Auction Design
LLM

Researchers developed the CLUB algorithm for optimizing reserve prices in multi-phase second-price auctions using reinfo...
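The CLUB algorithm's details are truncated above. As a generic illustration of learning reserve prices in repeated second-price auctions (not the paper's method), one can run a UCB bandit over a discretized grid of candidate reserves; the toy auction model below assumes two independent uniform[0, 1] bidders.

```python
import math
import random

def run_ucb_reserve(prices, n_rounds=3000, seed=0):
    """UCB1 over candidate reserve prices in repeated second-price
    auctions with two uniform[0, 1] bidders (toy model). Returns the
    most-played reserve price."""
    rng = random.Random(seed)
    counts = [0] * len(prices)
    totals = [0.0] * len(prices)

    def revenue(reserve):
        bids = sorted((rng.random(), rng.random()), reverse=True)
        if bids[0] < reserve:
            return 0.0                    # no bidder clears the reserve
        return max(bids[1], reserve)      # winner pays max(2nd bid, reserve)

    for t in range(1, n_rounds + 1):
        if t <= len(prices):
            arm = t - 1                   # play each arm once first
        else:
            arm = max(range(len(prices)),
                      key=lambda i: totals[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        counts[arm] += 1
        totals[arm] += revenue(prices[arm])
    return prices[max(range(len(prices)), key=lambda i: counts[i])]

best = run_ucb_reserve([0.0, 0.25, 0.5, 0.75])
```

For this toy model the revenue-optimal reserve is 0.5, and with enough rounds the most-played arm tends toward it, though a short run may not separate nearby arms; a multi-phase design like the paper's addresses exactly how to allocate exploration across phases.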