💥 History News

2024

☄️ EgoAlpha releases TrustGPT, which focuses on reasoning. Trust the GPT with the strongest reasoning abilities for authentic and reliable answers. You can click here or visit the Playgrounds directly to experience it.

[2024.9.12]

[2024.9.11]

[2024.9.10]

[2024.9.9]

[2024.9.8]

[2024.9.7]

[2024.9.6]

[2024.9.5]

[2024.9.4]

[2024.9.3]

[2024.9.2]

[2024.9.1]

[2024.8.31]

[2024.8.30]

[2024.8.29]

[2024.8.28]

[2024.8.27]

[2024.8.26]

[2024.8.25]

[2024.8.24]

[2024.8.23]

[2024.8.22]

[2024.8.21]

[2024.8.20]

[2024.8.19]

[2024.8.18]

[2024.8.17]

[2024.8.16]

[2024.8.15]

[2024.8.14]

[2024.8.13]

[2024.8.12]

[2024.8.11]

[2024.8.10]

[2024.8.9]

[2024.8.8]

[2024.8.7]

[2024.8.6]

[2024.8.5]

[2024.8.4]

[2024.8.3]

[2024.8.2]

[2024.8.1]

[2024.7.31]

[2024.7.30]

[2024.7.29] 🔥🔥🔥 Paper: Wolf: Captioning Everything with a World Summarization Framework

[2024.7.28]

[2024.7.27]

[2024.7.26]

[2024.7.25]

[2024.7.24]

[2024.7.23]

[2024.7.22]

[2024.7.21]

[2024.7.20]

[2024.7.19]

[2024.7.18]

[2024.7.17]

[2024.7.16]

[2024.7.15]

[2024.7.14]

[2024.7.13]

[2024.7.12]

[2024.7.11]

[2024.7.10]

[2024.7.9]

[2024.7.8]

[2024.7.7]

[2024.7.6]

[2024.7.5]

[2024.7.4]

[2024.7.3]

[2024.7.2]

[2024.7.1]

[2024.6.30]

[2024.6.29]

[2024.6.28]

[2024.6.27]

[2024.6.26]

[2024.6.25]

[2024.6.24]

[2024.6.23]

[2024.6.22]

[2024.6.21]

[2024.6.20]

[2024.6.19]

[2024.6.18]

[2024.6.17]

[2024.6.16]

[2024.6.15]

[2024.6.14]

[2024.6.13]

[2024.6.12]

[2024.6.11]

[2024.6.10]

[2024.6.9]

[2024.6.8]

[2024.6.7]

[2024.6.6]

[2024.6.5]

[2024.6.4]

[2024.6.3]

[2024.6.2]

[2024.6.1]

[2024.5.31]

[2024.5.30]

[2024.5.29]

[2024.5.28]

[2024.5.27]

[2024.5.26]

[2024.5.25]

[2024.5.24]

[2024.5.23]

[2024.5.22]

[2024.5.21]

[2024.5.20]

[2024.5.19]

[2024.5.18]

[2024.5.17]

[2024.5.16]

[2024.5.15]

[2024.5.14]

[2024.5.13]

[2024.5.12]

[2024.5.11]

[2024.5.10]

[2024.5.9]

[2024.5.8]

[2024.5.7]

[2024.5.6]

[2024.5.5]

[2024.5.4]

[2024.5.3]

[2024.5.2]

[2024.5.1]

[2024.4.30]

[2024.4.29]

[2024.4.28]

[2024.4.27]

[2024.4.26]

[2024.4.25]

[2024.4.24]

[2024.4.23]

[2024.4.22]

[2024.4.21]

[2024.4.20]

[2024.4.19]

[2024.4.18]

[2024.4.17]

[2024.4.16]

[2024.4.15]

[2024.4.14]

[2024.4.13]

[2024.4.12]

[2024.4.11]

[2024.4.10]

[2024.4.9]

[2024.4.8]

[2024.4.7]

[2024.4.6]

[2024.4.5]

[2024.4.4]

[2024.4.3]

[2024.4.2]

[2024.4.1]

[2024.3.31]

[2024.3.30]

[2024.3.29]

[2024.3.28]

[2024.3.27]

[2024.3.26]

[2024.3.25]

[2024.3.24]

[2024.3.23]

[2024.3.22]

[2024.3.21]

[2024.3.20]

[2024.3.19]

[2024.3.18]

[2024.3.17]

[2024.3.16]

[2024.3.15]

[2024.3.14]

[2024.3.13]

[2024.3.12]

[2024.3.11]

[2024.3.10]

[2024.3.9]

[2024.3.8]

[2024.3.7]

[2024.3.6]

[2024.3.5]

[2024.3.4]

[2024.3.3]

[2024.3.2]

[2024.3.1]

[2024.2.29]

[2024.2.28]

[2024.2.27]

[2024.2.26]

[2024.2.25]

[2024.2.24]

[2024.2.23]

[2024.2.22]

[2024.2.21]

[2024.2.20]

[2024.2.19]

[2024.2.18]

[2024.2.17]

[2024.2.16]

[2024.2.15]

[2024.2.14]

[2024.2.13]

[2024.2.12]

[2024.2.11]

[2024.2.10]

[2024.2.9]

[2024.2.8]

[2024.2.7]

[2024.2.6]

[2024.2.5]

[2024.2.4]

[2024.2.3]

[2024.2.2]

[2024.2.1]

[2024.1.31]

[2024.1.30]

[2024.1.29]

[2024.1.28]

[2024.1.27]

[2024.1.26]

[2024.1.25]

[2024.1.24]

[2024.1.23]

[2024.1.22]

[2024.1.21]

[2024.1.20]

[2024.1.19]

[2024.1.18]

[2024.1.17]

[2024.1.16]

[2024.1.15]

[2024.1.14]

[2024.1.13]

[2024.1.12]

[2024.1.11]

[2024.1.10]

[2024.1.9]

[2024.1.8]

[2024.1.7]

[2024.1.6]

[2024.1.5]

[2024.1.4]

[2024.1.3]

[2024.1.2]

[2024.1.1]

2023

[2023.12.31]

[2023.12.30]

[2023.12.29]

  • KwaiAgents is a series of Agent-related works open-sourced by KwaiKEG from Kuaishou Technology 【Paper/Github】

[2023.12.28]

[2023.12.27]

[2023.12.26]

[2023.12.25]

[2023.12.24]

[2023.12.23]

[2023.12.22]

[2023.12.21]

[2023.12.20]

[2023.12.19]

[2023.12.18]

[2023.12.17]

[2023.12.16]

[2023.12.15]

[2023.12.14]

[2023.12.13]

[2023.12.12]

[2023.12.11]

[2023.12.10]

[2023.12.9]

[2023.12.8]

[2023.12.7]

[2023.12.6]

[2023.12.5]

[2023.12.4]

[2023.12.3]

[2023.12.2]

[2023.12.1]

  • Peking University open-sources its newest multimodal LLM, trained on mixed datasets and directly usable for image and video tasks without modification 【arXiv/Demo/GitHub/HuggingFace】

[2023.11.30]

[2023.11.29]

[2023.11.28]

[2023.11.27]

[2023.11.26]

[2023.11.25]

[2023.11.24]

[2023.11.23]

[2023.11.22]

[2023.11.21]

[2023.11.20]

[2023.11.19]

[2023.11.18]

[2023.11.17]

[2023.11.16]

[2023.11.15]

[2023.11.14]

[2023.11.13]

[2023.11.12]

[2023.11.11]

[2023.11.10]

[2023.11.9]

[2023.11.8]

[2023.11.7]

[2023.11.6]

  • 🔥🔥🔥 01.Ai open-sources its first large models, the Yi series: Yi-34B and Yi-6B.
  • 🔥🔥🔥 Elon Musk's xAI ships two releases back to back: PromptIDE & Grok

[2023.11.5]

[2023.11.4]

[2023.11.3]

[2023.11.2]

[2023.11.1]

[2023.10.31]

[2023.10.30]

[2023.10.29]

[2023.10.28]

[2023.10.27]

[2023.10.26]

[2023.10.25]

[2023.10.24]

[2023.10.23]

[2023.10.22]

[2023.10.21]

[2023.10.20]

[2023.10.19]

[2023.10.18]

[2023.10.17]

[2023.10.16]

[2023.10.15]

[2023.10.14]

[2023.10.13]

[2023.10.12]

[2023.10.11]

[2023.10.10]

[2023.10.9]

[2023.10.8]

[2023.10.7]

[2023.10.6]

[2023.10.5]

[2023.10.4]

[2023.10.3]

[2023.10.2]

[2023.10.1]

[2023.9.30]

[2023.9.29]

[2023.9.28]

[2023.9.27]

[2023.9.26]

[2023.9.25]

[2023.9.24]

[2023.9.23]

[2023.9.22]

[2023.9.21]

[2023.9.20]

[2023.9.19]

[2023.9.18]

[2023.9.17]

[2023.9.16]

[2023.9.15]

[2023.9.14]

  • Can LLMs Really Reason and Plan? | blog @ CACM | Communications of the ACM 【Paper/Video】

[2023.9.13]

[2023.9.12]

[2023.9.11]

[2023.9.10]

[2023.9.9]

[2023.9.8]

[2023.9.7]

  • Baichuan Intelligence releases the Baichuan2 large model, comprehensively ahead of Llama2, with training checkpoints also open-sourced: Github/Technical Report

[2023.9.6]

[2023.9.5]

[2023.9.4]

[2023.9.3]

[2023.9.2]

[2023.9.1]

[2023.8.31]

[2023.8.30]

[2023.8.29]

[2023.8.28]

[2023.8.27]

[2023.8.26]

[2023.8.25]

[2023.8.24]

[2023.8.23]

[2023.8.22]

[2023.8.21]

[2023.8.20]

[2023.8.19]

[2023.8.18]

[2023.8.17]

[2023.8.16]

[2023.8.15]

[2023.8.14]

[2023.8.13]

[2023.8.12]

[2023.8.11]

[2023.8.10]

[2023.8.9]

[2023.8.8]

[2023.8.7]

[2023.8.6]

[2023.8.5]

[2023.8.4]

[2023.8.3]

[2023.8.2]

[2023.8.1]

[2023.7.31]

[2023.7.30]

[2023.7.29]

[2023.7.28]

[2023.7.27]

[2023.7.26]

[2023.7.25]

[2023.7.24]

[2023.7.23]

[2023.7.22]

[2023.7.21]

[2023.7.20]

  • New architecture: RetNet (Retentive Network), going beyond the Transformer 👉Paper👈

[2023.7.19]

[2023.7.18]

[2023.7.17]

[2023.7.16]

  • The Emu model is open-sourced: a versatile "multimodal-to-multimodal" generalist: Model / Demo

[2023.7.15]

[2023.7.14]

[2023.7.13]

[2023.7.12]

[2023.7.11]

[2023.7.10]

[2023.7.9]

[2023.7.8]

[2023.7.7]

[2023.7.6]

[2023.7.5]

[2023.7.4]

[2023.7.3]

[2023.7.2]

[2023.7.1]

[2023.6.30]

[2023.6.29]

[2023.6.28]

[2023.6.27]

[2023.6.26]

[2023.6.25]

[2023.6.24]

[2023.6.23]

[2023.6.22]

[2023.6.21]

[2023.6.20]

[2023.6.19]

[2023.6.18]

[2023.6.17]

[2023.6.16]

  • The financial model FinGPT is open-sourced. Benchmarked against BloombergGPT, its trainable parameters can be reduced from 6.17 billion to 3.67 million, and it can predict stock prices. (Paper/Code)

[2023.6.15]

[2023.6.14]

[2023.6.13]

[2023.6.12]

[2023.6.11]

[2023.6.10]

[2023.6.9]

[2023.6.8]

[2023.6.7]

[2023.6.6]

[2023.6.5]

[2023.6.4]

  • PandaGPT: One model unifies six modalities (Page/Paper)

[2023.6.3]

[2023.6.2]

[2023.6.1]

[2023.5.31]

[2023.5.30]

[2023.5.29]

[2023.5.28]

[2023.5.27]

[2023.5.26]

[2023.5.25]

[2023.5.24]

[2023.5.23]

[2023.5.22]

[2023.5.21]

[2023.5.20]

[2023.5.19]

[2023.5.18]

[2023.5.17]

[2023.5.16]

[2023.5.15]

[2023.5.14]

[2023.5.13]

[2023.5.12]

[2023.5.11]

[2023.5.10]

[2023.5.9]

[2023.5.8]

[2023.5.7]

[2023.5.6]

[2023.5.5]

[2023.5.4]

[2023.5.3]

[2023.5.2]

[2023.5.1]

[2023.4.30]

[2023.4.29]

[2023.4.28]

[2023.4.27]

[2023.4.26]

[2023.4.25]

[2023.4.24]

[2023.4.23]

[2023.4.22] Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models [Paper/Project]

[2023.4.21]

[2023.4.20]

[2023.4.19]

[2023.4.18]

[2023.4.17]

[2023.4.16]

[2023.4.15]

[2023.4.14]

[2023.4.13] Three Amazing Works:

[2023.4.12] OpenAGI: When LLM Meets Domain Experts

[2023.4.11] Why think step-by-step? Reasoning emerges from the locality of experience

[2023.4.10] TagGPT: Large Language Models are Zero-shot Multimodal Taggers

[2023.4.9] A new AI model from Meta AI: Segment Anything Model (SAM) (Paper/Code)

[2023.4.8] EleutherAI & Yale et al. propose Pythia, a large-scale language model analysis suite spanning training and scaling (Paper/Code)

[2023.4.7] Stanford releases the 7-billion-parameter open-source model Vicuna-7B: compact and efficient, yet powerful in functionality

[2023.4.6] Effective Theory of Transformers at Initialization

[2023.4.5] REFINER: Reasoning Feedback on Intermediate Representations

[2023.4.4] Where are we in the search for an Artificial Visual Cortex for Embodied Intelligence?

[2023.4.3] Self-Refine: Iterative Refinement with Self-Feedback

[2023.4.1] A survey of Large Language Models

[2023.3.31] BloombergGPT: A Large Language Model for Finance

[2023.3.30] GPTEval: NLG Evaluation using GPT-4 with Better Human Alignment

[2023.3.29] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention

[2023.3.28] ChatGPT Outperforms Crowd-Workers for Text-Annotation Tasks

[2023.3.27] Scaling Expert Language Models with Unsupervised Domain Discovery

[2023.3.26] CoLT5: Faster Long-Range Transformers with Conditional Computation

[2023.3.23] OpenAI announces 'Plug-ins' for ChatGPT that enable it to perform actions beyond text.

[2023.3.22] GitHub launches Copilot X, aiming at the future of AI-powered software development.

[2023.3.21] Google Bard is now available in the US and UK, with more countries to come.

[2023.3.20] OpenAI's new paper looks at the economic impact of LLMs on the labor market: GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models

[2023.3.17] Microsoft 365 Copilot released: Word, Excel, PowerPoint, and Outlook powered by LLMs.

[2023.3.16] Baidu announces its LLM "文心一言" (ERNIE Bot, built on ERNIE 3.0 + PLATO)

[2023.3.15] Two breaking news items: OpenAI announces GPT-4 (Paper🔗); Google announces the PaLM API.

[2023.3.13] Stanford releases a fine-tuned version of LLaMA (Alpaca)

[2023.3.10] Together announces OpenChatKit

[2023.3.9] OpenAI announces that GPT-4 is coming next week and will be multimodal.

[2023.3.8] Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models

[2023.3.7] Larger language models do in-context learning differently

[2023.3.6] Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning