


-
Machine Learning @programming.dev Nemeski @lemm.ee
community.amd.com AMD Unveils Its First Small Language Model AMD-135M
In the ever-evolving landscape of artificial intelligence, large language models (LLMs) like GPT-4 and Llama have garnered significant attention for their impressive capabilities in natural language processing and generation. However, small language models (SLMs) are emerging as an essential counter...
-
Machine Learning @programming.dev Pierre-Yves Lapersonne @programming.dev Evaluation of large language model (LLM) to generate efficient Solidity code
opensource.orange.com Evaluation of large language model (LLM) to generate efficient Solidity code – Orange Open Source
-
Machine Learning @programming.dev Nemeski @lemm.ee PyTorch 2.4 Now Supports Intel GPUs for Faster Workloads
pytorch.org Accelerate Your AI: PyTorch 2.4 Now Supports Intel GPUs for Faster Workloads
We have exciting news! PyTorch 2.4 now supports Intel® Data Center GPU Max Series and the SYCL software stack, making it easier to speed up your AI workflows for both training and inference. This update allows you to have a consistent programming experience with minimal coding effort and extends...
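In practice the new backend slots into the usual device-selection pattern. A minimal sketch, assuming the `xpu` device type that PyTorch 2.4 introduced for Intel GPUs; the helper name is illustrative, and the code falls back gracefully when torch or the hardware is absent:

```python
def pick_device() -> str:
    """Pick the best available PyTorch device string, preferring Intel XPU.

    Falls back to CUDA, then CPU; also works when torch is not installed.
    """
    try:
        import torch
    except ImportError:
        return "cpu"
    # PyTorch 2.4+ exposes Intel Data Center GPUs through the "xpu" device type.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"

print(pick_device())
```

Models and tensors are then moved with the usual `.to(pick_device())` call, which is what "consistent programming experience" refers to.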
-
Machine Learning @programming.dev ericjmorey @programming.dev Learn PyTorch for Deep Learning: Zero to Mastery | Free Online Book | Daniel Bourke
www.learnpytorch.io Zero to Mastery Learn PyTorch for Deep Learning
Learn important machine learning concepts hands-on by writing PyTorch code.
About this course
Who is this course for?
You: Are a beginner in the field of machine learning or deep learning or AI and would like to learn PyTorch.
This course: Teaches you PyTorch and many machine learning, deep learning and AI concepts in a hands-on, code-first way.
If you already have 1+ years of experience in machine learning, this course may still help, but it is specifically designed to be beginner-friendly.
What are the prerequisites?
- 3-6 months coding Python.
- At least one beginner machine learning course (though this can often be skipped; resources are linked for many different topics).
- Experience using Jupyter Notebooks or Google Colab (though you can pick this up as we go along).
- A willingness to learn (most important).
-
Machine Learning @programming.dev ericjmorey @programming.dev What’s Really Going On in Machine Learning? Some Minimal Models | Stephen Wolfram | August 22, 2024
writings.stephenwolfram.com What’s Really Going On in Machine Learning? Some Minimal Models
Stephen Wolfram explores minimal models and their visualizations, aiming to explain the underlying functionality of neural nets and ultimately machine learning.
-
Machine Learning @programming.dev ericjmorey @programming.dev Data Science Handbook | Curated resources (Free & Paid) to help data scientists learn, grow, and break into the field of data science | Andres Vourakis | Last update Jul 23, 2024
github.com GitHub - andresvourakis/data-scientist-handbook: Curated Data Science resources (Free & Paid) to help aspiring and experienced data scientists learn, grow, and advance their careers.
Andres Vourakis writes:
Data Scientist Handbook 2024
Curated resources (Free & Paid) to help data scientists learn, grow, and break into the field of data science.
Even though there are hundreds of resources out there (too many to keep track of), I will try to limit them to a maximum of 5 per category so you get the most valuable and relevant ones. The whole point of this repository is to help you avoid getting overwhelmed by too many choices, so you can spend less time researching and more time learning.
FAQs
- How is curation done? Curation is based on thorough research, recommendations from people I trust, and my years of experience as a Data Scientist.
- Are all resources free? Most resources here will be free, but I will also include paid alternatives if they are truly valuable to your career development. All paid resources include the symbol 💲.
- How often is the repository updated? I plan to come back h
-
Machine Learning @programming.dev ericjmorey @programming.dev Elements of Data Science | Allen B. Downey | July 17, 2024
www.allendowney.com Elements of Data Science
I’m excited to announce the launch of my newest book, Elements of Data Science. As the subtitle suggests, it is about “Getting started with Data Science and Python”. Order now fro…
July 17, 2024
Allen B. Downey writes:
Elements of Data Science is an introduction to data science for people with no programming experience. My goal is to present a small, powerful subset of Python that allows you to do real work with data as quickly as possible.
Part 1 includes six chapters that introduce basic Python with a focus on working with data.
Part 2 presents exploratory data analysis using Pandas and empiricaldist — it includes a revised and updated version of the material from my popular DataCamp course, “Exploratory Data Analysis in Python.”
Part 3 takes a computational approach to statistical inference, introducing resampling methods, bootstrapping, and randomization tests.
Part 4 is the first of two case studies. It uses data from the General Social Survey to explore changes in political beliefs and attitudes in the U.S. in the last 50 years. The data points on the cover are f
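The resampling idea at the heart of Part 3 can be sketched in a few lines of plain Python. This is a generic percentile-bootstrap confidence interval, not code from the book; the function name, defaults, and sample data are illustrative:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic.

    Resamples the data with replacement n_boot times, computes the
    statistic on each resample, and returns the (alpha/2, 1 - alpha/2)
    percentiles of the resulting bootstrap distribution.
    """
    rng = random.Random(seed)
    boots = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = boots[int(n_boot * alpha / 2)]
    hi = boots[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

heights = [61, 64, 65, 66, 67, 68, 68, 69, 70, 71, 72, 74]
print(bootstrap_ci(heights))  # e.g. roughly (66, 70) for the mean
```

The same machinery gives a randomization test by shuffling group labels instead of resampling with replacement.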
-
Machine Learning @programming.dev Nemeski @lemm.ee
arstechnica.com Researchers upend AI status quo by eliminating matrix multiplication in LLMs
Running AI models without floating point matrix math could mean far less power consumption.
-
Machine Learning @programming.dev ericjmorey @programming.dev Meta (Facebook) is sharing new research, models, and datasets from Meta FAIR
ai.meta.com Sharing new research, models, and datasets from Meta FAIR
Meta FAIR is releasing several new research artifacts. Our hope is that the research community can use them to innovate, explore, and discover new ways to apply AI at scale.
-
Machine Learning @programming.dev Akisamb @programming.dev LGGMs: a new class of graph generative models trained on a large corpus of graphs
www.marktechpost.com Large Generative Graph Models (LGGMs): A New Class of Graph Generative Model Trained on a Large Corpus of Graphs
-
Machine Learning @programming.dev Nemeski @lemm.ee
machinelearning.apple.com Introducing Apple’s On-Device and Server Foundation Models
At the 2024 Worldwide Developers Conference, we introduced Apple Intelligence, a personal intelligence system integrated deeply into iOS 18…
-
Machine Learning @programming.dev ericjmorey @programming.dev Let's reproduce GPT-2 (124M) | Andrej Karpathy | Jun 9, 2024
Video description:
We reproduce the GPT-2 (124M) from scratch.
This video covers the whole process:
First we build the GPT-2 network, then we optimize its training to be really fast, then we set up the training run following the GPT-2 and GPT-3 paper and their hyperparameters, then we hit run, and come back the next morning to see our results, and enjoy some amusing model generations.
Keep in mind that in some places this video builds on the knowledge from earlier videos in the Zero to Hero Playlist (see my channel). You could also see this video as building my nanoGPT repo, which by the end is about 90% similar.
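The "124M" in the title can be checked directly from the published GPT-2 small configuration (12 layers, d_model 768, vocab 50257, context 1024, tied input/output embeddings). A sketch in plain Python, counting the parameters of the standard architecture rather than anything from the video itself:

```python
def gpt2_param_count(n_layer=12, n_embd=768, vocab=50257, block=1024):
    """Count parameters of a GPT-2-style model with tied embeddings."""
    wte = vocab * n_embd          # token embedding (tied with the LM head)
    wpe = block * n_embd          # learned positional embedding
    per_block = (
        2 * n_embd                          # ln_1 (scale + shift)
        + n_embd * 3 * n_embd + 3 * n_embd  # attention qkv projection
        + n_embd * n_embd + n_embd          # attention output projection
        + 2 * n_embd                        # ln_2
        + n_embd * 4 * n_embd + 4 * n_embd  # MLP up-projection
        + 4 * n_embd * n_embd + n_embd      # MLP down-projection
    )
    ln_f = 2 * n_embd                       # final layer norm
    return wte + wpe + n_layer * per_block + ln_f

print(gpt2_param_count())  # 124,439,808, i.e. the "124M" model
```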
-
Machine Learning @programming.dev ericjmorey @programming.dev A Few Useful Things to Know About Machine Learning | Tapping into the "folk knowledge" needed to advance machine learning | Pedro Domingos | Communications of the ACM | vol. 55 no. 10 | October 2012
Pedro Domingos summarizes 12 key lessons that machine learning researchers and practitioners have learned. These include pitfalls to avoid, important issues to focus on, and answers to common questions.
- [Theoretical Guarantees Are Not What They Seem](https://cacm.ac
-
Machine Learning @programming.dev ericjmorey @programming.dev
journals.lww.com Multicollinearity in Logistic Regression Models: Anesthesia & Analgesia
Bayman, Emine Ozgur PhD*; Dexter, Franklin MD, PhD, FASA†. Multicollinearity in Logistic Regression Models. Anesthesia & Analgesia 133(2):p 362-365, August 2021. | DOI: 10.1213/ANE.0000000000005593
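The variance inflation factor (VIF) used to diagnose multicollinearity has a closed form in the two-predictor case, VIF = 1 / (1 - r²), where r is the correlation between the predictors. A small illustrative check in plain Python; the data is made up to be nearly collinear and is not from the paper:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def vif_two_predictors(xs, ys):
    """VIF of either predictor when the model has exactly two predictors."""
    r = pearson_r(xs, ys)
    return 1.0 / (1.0 - r ** 2)

x1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
x2 = [2 * v + 0.1 * (-1) ** v for v in x1]  # almost exactly 2 * x1
print(vif_two_predictors(x1, x2))  # far above the common cutoff of 10
```

With more than two predictors, each VIF comes from regressing one predictor on all the others, but the 1 / (1 - R²) form is the same.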
-
Machine Learning @programming.dev Akisamb @programming.dev Scallop: A Language for Neurosymbolic Programming
cross-posted from: https://lemmy.one/post/13942290
Abstract: We present Scallop, a language which combines the benefits of deep learning and logical reasoning. Scallop enables users to write a wide range of neurosymbolic applications and train them in a data- and compute-efficient manner. It achieves these goals through three key features: 1) a flexible symbolic representation that is based on the relational data model; 2) a declarative logic programming language that is based on Datalog and supports recursion, aggregation, and negation; and 3) a framework for automatic and efficient differentiable reasoning that is based on the theory of provenance semirings. We evaluate Scallop on a suite of eight neurosymbolic applications from the literature. Our evaluation demonstrates that Scallop is capable of expressing algorithmic reasoning in diverse and challenging AI tasks, provides a succinct interface for machine learning programmers to integrate logical domain knowledge, and yields so
-
Machine Learning @programming.dev ericjmorey @programming.dev Activation function and GLU variants for Transformer models | Tarique Anwar | Apr 18, 2022
medium.com Activation function and GLU variants for Transformer models
Characterizing the first week of April 2022 as eventful in the field of AI and Deep Learning would be an understatement. Within the same…
Apr 18, 2022 | Tarique Anwar Writes:
The main reason ReLU is used is that it is simple, fast, and empirically seems to work well.
But with the emergence of Transformer-based models, different activation functions and GLU variants have been experimented with and seem to perform better. Some of them are:
- GeLU²
- Swish¹
- GLU³
- GEGLU⁴
- SwiGLU⁴
We will go over some of these in detail, but before that let’s see where exactly these activations are used in the Transformer architecture.
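For reference, the activations listed above are all one-liners. A sketch in plain Python, using the common tanh approximation of GELU; the GLU-family functions take the two half-projections `a` and `b` that the Transformer feed-forward layer would produce:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def swish(x, beta=1.0):   # a.k.a. SiLU when beta = 1
    return x * sigmoid(beta * x)

def gelu(x):              # tanh approximation of GELU
    c = math.sqrt(2.0 / math.pi)
    return 0.5 * x * (1.0 + math.tanh(c * (x + 0.044715 * x ** 3)))

def glu(a, b):            # gated linear unit: a gated by sigmoid(b)
    return a * sigmoid(b)

def geglu(a, b):          # GLU with a GELU gate
    return a * gelu(b)

def swiglu(a, b):         # GLU with a Swish gate, used in e.g. PaLM and LLaMA
    return a * swish(b)

print(gelu(1.0), swish(1.0), swiglu(1.0, 1.0))
```

In a real model `a` and `b` are whole tensors from two linear projections and the products are elementwise, but the scalar forms above are exactly the functions being compared.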
Read Activation function and GLU variants for Transformer models