All Activity
- Yesterday
-
hendrik joined the community
- Earlier
-
Hello everyone, I’m preparing for a deep learning job interview and want to make sure I’m well-equipped for both theoretical and practical questions. I’d love to hear from those who have gone through deep learning interviews recently or have experience conducting them. What are some of the most commonly asked deep learning interview questions? I assume questions on neural network architectures, backpropagation, optimization techniques, and loss functions will come up, but I’d like to dig deeper. Here are a few specific areas where I’d appreciate guidance:
- Conceptual Questions: What fundamental deep learning topics do interviewers focus on the most? Are there any tricky theoretical questions that often catch candidates off guard?
- Mathematical & Algorithmic Understanding: How in-depth do interviews typically go into topics like gradient descent variants, activation functions, or regularization techniques? Any recommended resources for brushing up on key mathematical concepts?
- Hands-On & Practical Questions: How often are candidates asked to code neural networks from scratch versus using frameworks like TensorFlow or PyTorch? What kinds of debugging or model-improvement questions are commonly asked?
- Case Study/Scenario-Based Questions: Are there typical real-world problem statements used in interviews? How should one approach questions about model deployment and scaling?
I’d really appreciate any advice, sample questions, or personal experiences you can share. Thanks in advance for your help! Looking forward to your insights.
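For the hands-on portion, one classic from-scratch exercise is implementing gradient descent variants by hand. Below is a minimal sketch in NumPy comparing vanilla gradient descent with momentum on a toy quadratic loss; the loss function, learning rate, and momentum coefficient are illustrative stand-ins, not from any particular interview.
```python
import numpy as np

def loss_and_grad(w):
    """Toy quadratic bowl: L(w) = 0.5 * ||w||^2, so the gradient is just w."""
    return 0.5 * np.dot(w, w), w

def gradient_descent(w, lr=0.1, steps=50):
    for _ in range(steps):
        _, g = loss_and_grad(w)
        w = w - lr * g            # plain update: w <- w - lr * grad
    return w

def momentum(w, lr=0.1, beta=0.9, steps=50):
    v = np.zeros_like(w)
    for _ in range(steps):
        _, g = loss_and_grad(w)
        v = beta * v + g          # accumulate a velocity term
        w = w - lr * v            # step along the smoothed direction
    return w

w0 = np.array([5.0, -3.0])
print(gradient_descent(w0.copy()))  # both runs should approach the optimum at 0
print(momentum(w0.copy()))
```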
-
josephazizi joined the community
-
Most people can't grasp the seriousness of the matter.
-
jads joined the community
-
Really interesting discussion on the parallels between DeepSeek's innovative approach and the disruptive strategies seen in China's EV playbook. The way both are challenging traditional models and pushing for rapid adaptation is truly fascinating. For those looking to dive deeper into this comparison, Click here.
-
Jeremiah joined the community
-
Ronald joined the community
-
Jeromefot joined the community
-
wittyperceptron joined the community
-
whitesledd joined the community
-
coder_freak joined the community
-
NilambaVala joined the community
-
harsh started following What's the best way to start with learning Multimodal models?
-
I have been wanting to dive into Multimodal AI but cannot find a starting point. Would love to hear the community's opinion on this.
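One common low-friction entry point is a vision-language model such as CLIP. Below is a minimal sketch of zero-shot image-text matching using the Hugging Face transformers CLIP classes; the image path is a placeholder, and the checkpoint (openai/clip-vit-base-patch32) is just one widely used option.
```python
# Requires: pip install transformers torch pillow
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder path; use any local image
texts = ["a photo of a cat", "a photo of a dog"]

# The processor tokenizes the text and preprocesses the image in one call.
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image is the image-text similarity matrix; softmax over the
# text axis turns the scores into match probabilities.
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(texts, probs[0].tolist())))
```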
-
https://www.livemint.com/ai/a-decisive-ai-breakthrough-is-about-to-transform-the-world-11731755085418.html
-
Confirming EB-1 country-wise limits and resultant backlogs
Aman replied to Koonaal's topic in Profile Prep
This is correct.
-
Is it correct that there is a country-wise cap on the number of EB-1 applications considered (which results in delays in processing newer applications from countries with relatively more interest)?
-
Google is hosting a comprehensive course on Generative AI starting November 11th. Everyone interested in participating needs a Kaggle account and a Google account. Here is an overview of the topics:
- Day 1: Foundational Models & Prompt Engineering - Explore the evolution of LLMs, from transformers to techniques like fine-tuning and inference acceleration, and get trained in the art of prompt engineering for optimal LLM interaction.
- Day 2: Embeddings and Vector Stores/Databases - Learn the conceptual underpinnings of embeddings and vector databases, including embedding methods, vector search algorithms, and real-world applications with LLMs, as well as their tradeoffs (see the short vector-search sketch below).
- Day 3: Generative AI Agents - Learn to build sophisticated AI agents by understanding their core components and the iterative development process.
- Day 4: Domain-Specific LLMs - Delve into the creation and application of specialized LLMs like SecLM and Med-PaLM, with insights from the researchers who built them.
- Day 5: MLOps for Generative AI - Discover how to adapt MLOps practices for Generative AI and leverage Vertex AI's tools for foundation models and generative AI applications.
Register here: https://rsvp.withgoogle.com/events/google-generative-ai-intensive (you’ll receive a badge on your Kaggle profile upon course completion!)
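As a small illustration of the Day 2 topic, here is a minimal sketch of brute-force nearest-neighbour lookup by cosine similarity in NumPy; the random vectors are stand-ins for real embedding-model outputs, and the corpus size and dimensionality are arbitrary.
```python
import numpy as np

rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(1000, 64))   # pretend corpus embeddings
query = rng.normal(size=64)                    # pretend query embedding

# Normalize so that a dot product equals cosine similarity.
doc_norm = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
q_norm = query / np.linalg.norm(query)

scores = doc_norm @ q_norm                     # cosine similarity per document
top_k = np.argsort(scores)[::-1][:5]           # indices of the 5 best matches
print(top_k, scores[top_k])
```
Real vector databases replace this exhaustive scan with approximate indexes (e.g., graph- or quantization-based), trading a little recall for large speedups; that tradeoff is exactly what the Day 2 material covers.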
-
Emiliano Volpi started following subin_vidhu
-
Emiliano Volpi started following Aman
-
Thanks for sharing, @subin_vidhu! This post is a better fit for our AI News section, so I'll be moving it there. The Announcements section is intended for our community's announcements.
- 1 reply
-
Welcome, @Anirudh! Glad to have you onboard and look forward to your posts on NeuralNets 🙂
-
Welcome, Anirudh!
-
Hello everyone, I'm Anirudh 🙂 an MS in Data Science student at Indiana University, Bloomington. I'm very passionate about AI; I am currently working through the Deep Learning Specialization by DeepLearning.ai, and I actively use Aman's website for the notes. I found this community through Aman's LinkedIn post and was hoping to connect with people to learn and get my foot in the door of the AI industry. Excited to be a part of this community.
An
-
Hello everyone, hope you're all having a fun time learning and applying AI. I recently completed the first three courses in the Deep Learning Specialization by DeepLearning.ai. To start applying what I learned, I coded a shallow neural network from scratch and trained it on the Iris dataset. Now I want to code a deep neural network from scratch, but I'm not sure what a good dataset for this purpose would be. I'm looking to build a neural network with at least 5 hidden layers and apply a dataset to it, purely to learn and build good intuition for how DNNs work by coding them by hand. Basically, a "hello world" dataset for a deep neural network. I was hoping to ask here for some suggestions. If any of you know of such datasets, please let me know. Thank you for your time.
Thanks,
An
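In the meantime, here is a minimal sketch of what such a "hello world" deep network can look like when hand-coded in NumPy: 5 hidden ReLU layers and a sigmoid output trained on a synthetic binary-classification dataset. The layer sizes, learning rate, and toy data are illustrative choices, not recommendations.
```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                        # 200 samples, 4 features
y = (X[:, 0] + X[:, 1] ** 2 > 1.0).astype(float).reshape(-1, 1)

sizes = [4, 16, 16, 16, 16, 16, 1]                   # input, 5 hidden layers, output
Ws = [rng.normal(scale=np.sqrt(2.0 / m), size=(m, n))  # He-style init for ReLU
      for m, n in zip(sizes, sizes[1:])]
bs = [np.zeros((1, n)) for n in sizes[1:]]

def forward(X):
    """Return activations of every layer (ReLU hidden layers, sigmoid output)."""
    acts = [X]
    for i, (W, b) in enumerate(zip(Ws, bs)):
        z = acts[-1] @ W + b
        acts.append(np.maximum(z, 0.0) if i < len(Ws) - 1
                    else 1.0 / (1.0 + np.exp(-z)))
    return acts

lr = 0.05
for epoch in range(2000):
    acts = forward(X)
    # For sigmoid + binary cross-entropy, dL/dz at the output is (p - y) / N.
    delta = (acts[-1] - y) / len(X)
    for i in reversed(range(len(Ws))):
        grad_W = acts[i].T @ delta
        grad_b = delta.sum(axis=0, keepdims=True)
        if i > 0:                                    # backprop through ReLU
            delta = (delta @ Ws[i].T) * (acts[i] > 0)
        Ws[i] -= lr * grad_W
        bs[i] -= lr * grad_b

preds = forward(X)[-1] > 0.5
print("train accuracy:", (preds == y.astype(bool)).mean())
```
Swapping in a small real tabular dataset only requires changing the first and last entries of `sizes` (and the output activation and loss, if the task is multi-class).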
-
Tagged with:
- dataset
- deep neural network
(and 2 more)
-
A short summary from Meta's blog on the Llama 3.2 models:
- Meta Llama 3.2 Models: Meta is releasing new Llama 3.2 models, including small and medium-sized vision LLMs (11B and 90B) and lightweight, text-only models (1B and 3B) that fit onto edge and mobile devices, with pre-trained and instruction-tuned versions available.
- Llama Stack Distributions: The company is also introducing Llama Stack distributions, a standardized interface for customizing Llama models and building agentic applications, with a simplified and consistent experience for developers across multiple environments.
- Enhanced Safety Features: Llama 3.2 includes new updates to the Llama Guard family of safeguards, designed to support responsible innovation and empower developers to build safe and responsible systems, with optimized Llama Guard models for on-device deployment.
-
Hope this will help someone learning about Embedded ML.
-
- https://sites.google.com/g.harvard.edu/tinyml/home
- https://github.com/Mjrovai/UNIFEI-IESTI01-TinyML-2023.1?tab=readme-ov-file
- https://tinyml.seas.harvard.edu/courses/
- 1 reply
-
Welcome, @Pranav Kumar! Glad to have you onboard and look forward to your posts on NeuralNets 🙂