Modeling and Simulation in Python.pdf
8.1 MB
Book: Modeling and Simulation in Python
An Introduction for Scientists and Engineers
Authors: Allen B. Downey
ISBN: 978-1-7185-0217-8
year: 2023
pages: 344
Tags: #Python #Modeling
@Machine_learn
29733376.pdf
3.4 MB
Book: Test Your Skills in Python
Second Edition
Authors: Shivani Goel
ISBN: 978-93-5551-181-2
year: 2023
pages: 308
Tags: #Python
@Machine_learn
Mathematics of Deep Learning.pdf
10.8 MB
Book: Mathematics of Deep Learning
Authors: Leonid Berlyand and Pierre-Emmanuel Jabin
ISBN: 978-3-11-102431-8
year: 2023
pages: 308
Tags: #Python
@Machine_learn
30340466.pdf
5.1 MB
Book: Blockchain Tethered AI
Trackable, Traceable Artificial Intelligence and Machine Learning
Authors: Karen Kilroy, Lynn Riley, and Deepak Bhatta
ISBN: 978-1-098-13048-0
year: 2023
pages: 307
Tags: #Python #Blockchain
@Machine_learn
Python.for.Scientists.pdf
7.1 MB
Book: Python for Scientists
Third Edition
Authors: John M. Stewart
ISBN: 978-1-119-82094-9 (ebk)
year: 2023
pages: 301
Tags: #Python
@Machine_learn
Foundational-Python-for-Data-Science_bibis.ir.pdf
16.2 MB
Book: Foundational Python for Data Science
Authors: Kennedy R
ISBN: Null
year: 202
pages: 686
Tags: #Python
@Machine_learn
aipython.pdf
2.4 MB
Book: 📚 Python code for Artificial Intelligence: Foundations of Computational Agents
Authors: David L. Poole and Alan K. Mackworth
year: 2024
pages: 392
Tags: #Python
@Machine_learn
The first channel on Telegram that offers exciting questions, answers, and tests in data science, artificial intelligence, machine learning, and programming languages.
#interviews #datascience #python
https://yangx.top/DataScienceQ
Telegram
Python Data Science Jobs & Interviews
Your go-to hub for Python and Data Science—featuring questions, answers, quizzes, and interview tips to sharpen your skills and boost your career in the data-driven world.
Admin: @Hussein_Sheikho
Practical Statistics for Data Scientists.pdf
16 MB
Practical Statistics for Data Scientists
50+ Essential Concepts Using R and Python
#Python #Book
@Machine_learn
Executable Code Actions Elicit Better LLM Agents
1 Feb 2024 · Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji
Large Language Model (LLM) agents, capable of performing a broad range of actions, such as invoking tools and controlling robots, show great potential in tackling real-world challenges. LLM agents are typically prompted to produce actions by generating #JSON or text in a pre-defined format, which is usually limited by constrained action space (e.g., the scope of pre-defined tools) and restricted flexibility (e.g., inability to compose multiple tools). This work proposes to use executable Python code to consolidate LLM agents' actions into a unified action space (CodeAct). Integrated with a Python interpreter, CodeAct can execute code actions and dynamically revise prior actions or emit new actions upon new observations through multi-turn interactions. Our extensive analysis of 17 LLMs on API-Bank and a newly curated benchmark shows that CodeAct outperforms widely used alternatives (up to 20% higher success rate). The encouraging performance of CodeAct motivates us to build an open-source #LLM agent that interacts with environments by executing interpretable code and collaborates with users using natural language. To this end, we collect an instruction-tuning dataset CodeActInstruct that consists of 7k multi-turn interactions using CodeAct. We show that it can be used with existing data to improve models in agent-oriented tasks without compromising their general capability. CodeActAgent, finetuned from Llama2 and Mistral, is integrated with #Python interpreter and uniquely tailored to perform sophisticated tasks (e.g., model training) using existing libraries and autonomously self-debug.
Paper: https://arxiv.org/pdf/2402.01030v4.pdf
Codes:
https://github.com/epfllm/megatron-llm
https://github.com/xingyaoww/code-act
Datasets: MMLU - GSM8K - HumanEval - MATH
@Machine_learn
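A minimal sketch of the code-as-action loop the abstract describes: the model's action is a block of Python source, an interpreter executes it while keeping state across turns, and whatever it prints (or the traceback if it fails) is fed back as the next observation. Here, `query_llm` and the `TASK_DONE` stop marker are placeholders for illustration, not part of the CodeAct release.
```python
import contextlib
import io
import traceback

def query_llm(history):
    """Placeholder for any chat-completion call that returns a Python code string."""
    raise NotImplementedError("plug in your LLM client here")

def run_code_action(code, namespace):
    """Execute a code action; return its printed output, or the traceback on failure."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, namespace)  # actions share one namespace across turns
    except Exception:
        buffer.write(traceback.format_exc())  # errors become observations for self-debugging
    return buffer.getvalue()

def code_act_loop(task, max_turns=5):
    history = [{"role": "user", "content": task}]
    namespace = {}  # persists between turns, so later actions can revise earlier ones
    for _ in range(max_turns):
        code = query_llm(history)  # the action is executable Python, not JSON
        observation = run_code_action(code, namespace)
        history.append({"role": "assistant", "content": code})
        history.append({"role": "user", "content": f"Observation:\n{observation}"})
        if "TASK_DONE" in observation:  # hypothetical stop signal
            break
    return history
```
Keeping one namespace across turns is what lets a later action reuse or fix variables from an earlier one, and returning the traceback as an observation is what gives the agent room to self-debug.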