Today's world is moving fast towards using AI and Machine Learning in every field. The key to all of it is data. If, as Machine Learning Engineers, we can understand and restructure the data to fit our needs, we have already completed half the task.
Let us learn how to perform EDA (Exploratory Data Analysis) on data.
What we will learn in this tutorial:
Let's get started. We will try to fetch some sample data…
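The first pass of an EDA usually looks at shape, data types, missing values, and summary statistics. Here is a minimal sketch using pandas, with a small made-up DataFrame standing in for the sample data (the tutorial's actual dataset is not shown here):

```python
import pandas as pd

# Hypothetical sample data; replace with the dataset you actually fetch.
df = pd.DataFrame({
    "age": [25, 32, 47, None, 51],
    "city": ["Pune", "Delhi", "Pune", "Mumbai", "Delhi"],
    "income": [40000, 52000, 61000, 45000, None],
})

# First steps of any EDA: shape, dtypes, missing values, summary statistics.
print(df.shape)                  # (rows, columns)
print(df.dtypes)                 # column types
print(df.isna().sum())           # missing values per column
print(df.describe())             # numeric summary
print(df["city"].value_counts()) # distribution of a categorical column
```

These few calls already tell you how much cleaning and restructuring the data will need.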
YOLOv4 has several requirements; a primary one for training a custom object detector is OpenCV, which has to be installed from source rather than via pip.
To find out more about YOLOv4's requirements, check out the official repository here.
Let's get started.
The requirement as given officially is OpenCV >= 2.4.
1. CMake and the GCC/G++ compilers are needed to build OpenCV from source, and can be installed with:
sudo apt-get install cmake
sudo apt-get install gcc g++
2. NumPy is needed as a dependency for OpenCV and can be installed for Python 3 with the command:
sudo apt-get install python3-dev python3-numpy
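After building OpenCV, it is worth confirming that the installed version meets the OpenCV >= 2.4 requirement. A small sketch of such a check is below; `meets_requirement` is a hypothetical helper, and in practice you would pass it the string reported by `cv2.__version__`:

```python
def meets_requirement(version, minimum=(2, 4)):
    """Return True if a dotted version string is at least `minimum`."""
    parts = tuple(int(p) for p in version.split(".")[:2])
    return parts >= minimum

# In practice, pass the installed build's version string:
#   import cv2
#   assert meets_requirement(cv2.__version__)
print(meets_requirement("4.5.1"))
print(meets_requirement("2.3.0"))
```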
This blog draws on the work of many researchers and repositories, which are credited throughout the article. Special thanks to them!
YOLO is a state-of-the-art architecture for object detection; the latest version, YOLOv4, aims at optimal speed and accuracy and can be found here. Using the YOLOv4 architecture, it is possible to train our own custom object detector for particular classes of objects such as cars or people.
Data Scientists, and even Python Software Engineers, commonly need virtual environments to manage project dependencies separately from those installed system-wide.
Python provides the option to create virtual environments without installing any additional software or tools.
Let's get started with the following steps:
1. First, check which packages are already installed (for example with pip list). This will show all active dependencies with their versions.
2. To create the new virtual environment, enter the command:
python -m venv project_env
This will create a new virtual environment named “project_env”.
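Once the environment is activated, you can verify from Python itself that you are running inside it. Inside a venv, `sys.prefix` points at the environment directory while `sys.base_prefix` still points at the base interpreter, so a small sketch like this works as a check:

```python
import sys

def in_virtualenv():
    # Inside a venv, sys.prefix points at the environment directory,
    # while sys.base_prefix still points at the base interpreter.
    return sys.prefix != sys.base_prefix

print(in_virtualenv())
```

This prints True when run from an activated virtual environment and False otherwise.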
I was unaware that you could add a profile introduction to your GitHub account, which presents your profile in a really well-organized manner.
This feature is easy to use and makes your profile much more engaging.
There is a growing trend towards NoSQL databases like MongoDB, Cassandra, and Redis. But what are they, and how do they differ from a traditional relational database like MySQL?
What will we learn in this blog?
Let us get started with an introduction.
What is MongoDB?
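MongoDB is a document database: instead of rows in fixed-column tables, it stores JSON-like documents whose fields can vary from one document to the next. The sketch below illustrates that model with plain Python dicts and a toy `find` helper (with the real pymongo driver, the equivalent calls would be `collection.insert_one(...)` and `collection.find(...)`); the data is hypothetical:

```python
# MongoDB stores schemaless, JSON-like documents rather than fixed-column rows.
users = [
    {"name": "Asha", "age": 29, "skills": ["python", "mongodb"]},
    {"name": "Ravi", "age": 34},  # no "skills" field: fields can vary per document
]

def find(collection, query):
    """A toy version of MongoDB's find(): match documents on exact field values."""
    return [doc for doc in collection
            if all(doc.get(k) == v for k, v in query.items())]

print(find(users, {"age": 29}))
```

Contrast this with MySQL, where adding a `skills` column would require changing the table schema for every row.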
SIFT stands for Scale-Invariant Feature Transform. It is used to compare images and check for similarities.
We will work through this step by step, with code, using OpenCV.
Let's first look at the images we are going to inspect using SIFT.
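Once SIFT descriptors have been computed for both images, the comparison itself boils down to matching descriptors, typically filtered with Lowe's ratio test. Here is a NumPy sketch of that matching step, using random arrays as hypothetical descriptors; in real code they would come from `cv2.SIFT_create().detectAndCompute(image, None)`:

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.75):
    """Lowe's ratio test: keep a match only when the best neighbour in desc2
    is clearly closer than the second best. Inputs: (n, 128) SIFT descriptors."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        best, second = np.argsort(dists)[:2]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Hypothetical descriptors standing in for detectAndCompute output.
rng = np.random.default_rng(0)
desc1 = rng.normal(size=(5, 128))
desc2 = np.vstack([desc1 + 0.01 * rng.normal(size=(5, 128)),  # near-duplicates
                   rng.normal(size=(20, 128))])               # distractors
print(ratio_test_matches(desc1, desc2))
```

The more matches that survive the ratio test, the more similar the two images are.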
This is a continuation of my earlier repository, where I use a 1D LSTM for an Autoencoder and Anomaly Detection; it can be found here.
Here I extend the topic to LSTM Autoencoders for 2D data. I am writing a separate post about this because LSTM inputs tend to become really tricky.
Let's look at some code to understand 2D LSTM Autoencoders.
Autoencoders have several applications, such as:
We are going to apply the third application to very simple time-series data. The concept of Autoencoders can be applied to any neural network architecture, such as a DNN, LSTM, or RNN. Since we have time-series data, we are going to design an LSTM Autoencoder.
The concept is simple: we train the autoencoder on normal (non-anomalous) data, so that it learns to reconstruct a known input sequence. Then, when a sequence that contains anomalies is fed to…
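The detection step that follows can be sketched in NumPy: compute reconstruction errors on the normal training data, set a threshold from them, and flag any window whose error exceeds it. The error values below are hypothetical; in practice they would be the mean absolute (or squared) error between each input window and the autoencoder's reconstruction:

```python
import numpy as np

# Hypothetical reconstruction errors measured on normal training windows.
train_errors = np.array([0.10, 0.12, 0.09, 0.11, 0.10, 0.13])

# A common rule of thumb: flag anything beyond mean + 3 standard deviations.
threshold = train_errors.mean() + 3 * train_errors.std()

# Errors on new windows; one window reconstructs very poorly.
test_errors = np.array([0.11, 0.10, 0.95, 0.12])
anomalies = test_errors > threshold
print(threshold, anomalies)
```

Because the autoencoder was only ever shown normal sequences, anomalous windows reconstruct badly and stand out above the threshold.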
This is an example to get started with series-data reconstruction using LSTM Autoencoders.
Let us dive directly into the code, as there are several videos and blogs that explain in depth what LSTMs and Autoencoders are.
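The trickiest part of LSTM inputs, as mentioned above, is the shape: LSTM layers expect a 3D tensor of (samples, timesteps, features). A minimal NumPy sketch of turning a 2D time series into overlapping windows of that shape (the data here is made up for illustration):

```python
import numpy as np

def make_windows(series, timesteps):
    """Slice a (length, features) array into overlapping windows shaped
    (samples, timesteps, features) — the 3D input LSTM layers expect."""
    windows = [series[i:i + timesteps] for i in range(len(series) - timesteps + 1)]
    return np.stack(windows)

# Hypothetical 2D time series: 10 steps, 2 features per step.
data = np.arange(20, dtype=float).reshape(10, 2)
X = make_windows(data, timesteps=4)
print(X.shape)  # (samples, timesteps, features)
```

An array shaped this way can be passed directly as both the input and the reconstruction target when training the autoencoder.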