Valuing Life as an Asset, as a Statistic and at Gunpoint
Ever wondered how much your life is worth? This article treats life as an asset to be valued. That framing is extremely useful for insurance companies and as a metric for calculating compensation after tragic events, but it is also key to understanding how valuable (or not) a life is.
Math is beautiful.
Link: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3156911
#math #life #insurance #statistics
Alan Turing will become the face of the new £50 note
That's a great acknowledgment of the man behind much of theoretical computing.
Link: https://www.bbc.com/news/business-48962557
Turing's most famous work, 'On Computable Numbers': https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf
Turing machine: https://en.wikipedia.org/wiki/Turing_machine
#Turing #Theory #Math #history
Real numbers, data science and chaos: How to fit any dataset with a single parameter
A gentle reminder that the unit of information is the bit, and that a single parameter can contain more information than multiple parameters combined.
arXiv: https://arxiv.org/abs/1904.12320
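The trick, sketched very loosely below, is that a real number's binary expansion has unbounded capacity. The paper's actual construction uses a sin²-based chaotic map; plain bit-packing (an assumption of this sketch, not the paper's method) already shows one scalar carrying a whole dataset:

```python
from fractions import Fraction

TAU = 8  # bits kept per sample (precision knob; arbitrary choice for this sketch)

def encode(samples):
    """Pack samples from [0, 1) into the binary expansion of one number."""
    alpha = Fraction(0)
    for k, x in enumerate(samples):
        bits = int(x * 2**TAU)  # quantize the k-th sample to TAU bits
        alpha += Fraction(bits, 2**((k + 1) * TAU))
    return alpha

def decode(alpha, k):
    """Shift the k-th sample's bits back above the binary point and read them."""
    shifted = alpha * 2**((k + 1) * TAU)
    return (int(shifted) % 2**TAU) / 2**TAU

data = [0.25, 0.71, 0.33, 0.90]
alpha = encode(data)
print([round(decode(alpha, k), 2) for k in range(len(data))])
# -> [0.25, 0.71, 0.33, 0.9]: four samples recovered from one "parameter"
```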
#cs #bits #math
The Evolution and Dependencies of Scientific Python Libraries
Numerical computing libraries like NumPy and SciPy rely on foundational mathematical code spanning decades. Until recently, NumPy depended on reference Fortran BLAS/LAPACK implementations for linear algebra operations; modern builds typically link against OpenBLAS, which replaces much of that Fortran with optimized C and assembly. SciPy, however, still incorporates Fortran 77 code for certain functionality, such as ARPACK (used in eigenvalue computations) and FFTPACK (for Fourier transforms). These dependencies stem from legacy libraries like BLAS (1970s), LAPACK (1980s), and MINPACK (optimization), which remain widely used because their algorithms are mathematically stable and battle-tested. The same longevity shows up on the algorithm side, for instance in Simulated Annealing.
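You can check which of these layers your own installation links against; `show_config()` exists in both libraries, though its exact output depends on the build:

```python
import numpy as np
import scipy

np.show_config()     # names the BLAS/LAPACK implementation NumPy was built with
scipy.show_config()  # likewise for SciPy (commonly OpenBLAS or MKL)
```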
Simulated Annealing: A 1953 Algorithm in Modern ML
Imagine searching for the largest mushroom in a forest. Gradient methods risk settling for a local maximum, but Simulated Annealing (long available via SciPy's optimize module) balances exploration and exploitation: early random "high-energy" steps avoid local traps, then the search gradually refines toward the global optimum.
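SciPy's original `anneal` function was removed from `scipy.optimize` years ago; its modern descendant there is `dual_annealing`. A minimal sketch on the Rastrigin function, a standard benchmark riddled with local minima:

```python
import numpy as np
from scipy.optimize import dual_annealing

# Rastrigin: a "forest" full of local optima; the global minimum is 0 at the origin.
def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

result = dual_annealing(rastrigin, bounds=[(-5.12, 5.12)] * 2, seed=42)
print(result.x, result.fun)  # close to [0, 0] and 0.0
```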
Originally devised to model atomic behavior in molten metals (the Metropolis algorithm, 1953), it mimics annealing: slow cooling lets atoms settle into a uniform arrangement. The scientists introduced probabilistic acceptance of suboptimal states to escape flawed structures. The method was later adopted to optimize ML models, logistics, and pattern recognition, so familiar Python code ends up calling bindings roughly 15 years older than Python itself.
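The Metropolis acceptance rule itself fits in a few lines. A toy one-dimensional sketch (the step size and geometric cooling schedule are arbitrary choices here, not the 1953 paper's):

```python
import math
import random

def anneal(f, x0, temp=2.0, cooling=0.995, steps=10_000):
    """Minimize f by the Metropolis rule: always accept improvements,
    accept worse states with probability exp(-delta / temperature)."""
    x, fx = x0, f(x0)
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)  # random neighbouring state
        fc = f(candidate)
        delta = fc - fx
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x, fx = candidate, fc
        temp *= cooling  # slow "cooling" shrinks the tolerance for bad moves
    return x, fx

# Wiggly landscape with many local minima; the global one is near x ≈ -0.3.
print(anneal(lambda x: x**2 + 3 * math.sin(5 * x), x0=4.0))
```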
Source: Facebook post (Ru)
#SciPy #Fortran #NumPy #Math
Forwarded from Machinelearning
NVIDIA presented a new approach to training models for hard mathematical problems, taking first place in the Kaggle AIMO-2 competition.
The secret is the huge OpenMathReasoning dataset, consisting of 540K unique problems from Art of Problem Solving, 3.2M multi-step chain-of-thought (CoT) solutions, and 1.7M examples with integrated code execution (TIR).
For comparison, that is several times more than in the popular analogues MATH and GSM8K. All of this is supplemented by 566K examples for training generative solution selection (GenSelect), a method that outperforms classic majority voting.
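For contrast, the majority-voting baseline that GenSelect beats is one line of logic (a sketch with toy data; GenSelect itself uses a trained model to judge the candidates):

```python
from collections import Counter

def majority_vote(final_answers):
    """Return the most frequent final answer among sampled solutions."""
    return Counter(final_answers).most_common(1)[0][0]

# 16 candidate solutions reduced to their final answers (toy data):
print(majority_vote(["42"] * 9 + ["17"] * 5 + ["13"] * 2))  # -> "42"
```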
OpenMathReasoning was built carefully and responsibly. First, problems were filtered through Qwen2.5-32B to remove easy ones and those duplicating benchmarks. Then DeepSeek-R1 and QwQ-32B generated solutions, and iterative training with strict filtering improved their quality. For example, code in TIR solutions had to do more than merely verify steps; it had to contribute genuinely new computation, such as enumerating cases or solving equations numerically.
The OpenMath-Nemotron models (1.5B–32B parameters) trained on this dataset achieved SOTA results. The 14B version in TIR mode solves 76.3% of AIME24 problems versus 65.8% for the base DeepSeek-R1. With GenSelect, which analyzes 16 candidates at a time, accuracy jumps to 90%. Even the 1.5B model with GenSelect beats 32B giants on some tests.
@ai_machinelearning_big_data
#AI #ML #Math #Dataset #NVIDIA