<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[O.D.T]]></title><description><![CDATA[O.D.T]]></description><link>https://horiyomi.com</link><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 10:37:19 GMT</lastBuildDate><atom:link href="https://horiyomi.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Introduction To How Kafka Works And Implementation Using Python-client]]></title><description><![CDATA[OUTLINE
i. A brief introduction to why we might consider Kafka for our business.
ii. We explain what Kafka is.
iii. A brief explanation on the why of Kafka.
iv. Highlighting why Kafka is so fast.
v. A brief mention of companies using Kafka.
vi. How t...]]></description><link>https://horiyomi.com/introduction-to-how-kafka-works-and-implementation-using-python-client</link><guid isPermaLink="true">https://horiyomi.com/introduction-to-how-kafka-works-and-implementation-using-python-client</guid><category><![CDATA[kafka]]></category><category><![CDATA[Python]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[realtime]]></category><dc:creator><![CDATA[Damilola Ogungbesan]]></dc:creator><pubDate>Sun, 14 Mar 2021 12:37:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1615501898735/NtD2Q8H_G.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>OUTLINE</strong></p>
<p>i. A brief introduction to why we might consider Kafka for our business.</p>
<p>ii. We explain what Kafka is.</p>
<p>iii. A brief explanation on the <strong>why</strong> of Kafka.</p>
<p>iv. Highlighting why Kafka is so fast.</p>
<p>v. A brief mention of companies using Kafka.</p>
<p>vi. How to get started with Kafka installations, components of Kafka and what they're responsible for.</p>
<p>vii. A walkthrough tutorial on Kafka.</p>
<p>viii. Conclusion</p>
<hr />
<h1 id="setup-a-python-client-for-kafka-with-kafka-python"><strong>Setup a Python client for Kafka with kafka-python</strong></h1>
<p>Real-time data usage has become the new order of the day for businesses and their customers alike. However, a key factor to consider is how a business use case interacts with its data for real-time usage: does it write more than it reads, read more than it writes, or do plenty of both, all while needing to take actionable steps in real time in an event-driven way? This is where Apache Kafka comes in. In this tutorial we will go over what Kafka is, its core concepts, who is using it, how to set it up and how to use it with a Python client (<code>kafka-python</code>).</p>
<p><strong>What is Apache Kafka?</strong></p>
<p>Kafka is a distributed event-streaming and messaging system, consisting of servers and clients communicating over a high-performance TCP network protocol.</p>
<p>PS: Kafka was developed at LinkedIn but is now managed under the Apache Software Foundation, hence the name Apache Kafka. I will refer to Apache Kafka simply as Kafka throughout this tutorial.</p>
<p><strong>Event Streaming</strong></p>
<p>Event streaming is the capturing, processing and transforming of data in real time from various event sources, e.g. website clicks, databases, logging systems, IoT devices, etc., while ensuring a continuous flow and routing the streamed data to the various destinations that are anticipating it.</p>
<p><strong>Why Kafka?</strong></p>
<p>Kafka is used in real-time event-streaming architectures to provide real-time data analytics. Messages are stored on disk and replicated within the cluster, making them more durable and more reliable, and multiple subscribers are supported.</p>
<p>Kafka is able to continuously stream events by using the publish-subscribe (pub-sub) model: events can be read (subscribed to) as soon as they are written (published), processed, or stored for retention over a period, as Kafka gives you flexibility over how long to retain (store) the data.</p>
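<p>To make the pub-sub idea concrete before we touch Kafka itself, here is a toy, in-memory sketch in plain Python: the "topic" is an append-only log, and each subscriber tracks its own read position. This is, in miniature, how Kafka lets many consumers read the same stream independently (nothing Kafka-specific is used here).</p>
<pre><code class="lang-python"># A toy picture of log-based pub-sub (illustration only, no Kafka involved):
# the topic is an append-only list, each subscriber keeps its own offset.
class ToyTopic:
    def __init__(self):
        self.log = []        # append-only record log
        self.offsets = {}    # subscriber name -> next index to read

    def publish(self, record):
        self.log.append(record)   # writes always go to the end of the log

    def consume(self, subscriber):
        pos = self.offsets.get(subscriber, 0)
        records = self.log[pos:]  # each subscriber reads at its own pace
        self.offsets[subscriber] = len(self.log)
        return records

topic = ToyTopic()
topic.publish("order created")
topic.publish("order shipped")
print(topic.consume("billing"))    # ['order created', 'order shipped']
topic.publish("order delivered")
print(topic.consume("billing"))    # ['order delivered']
</code></pre>
<p>Note how the second call only returns the new event: reading does not remove records from the log, which is why multiple subscribers can each read the full stream, and why retention (not consumption) is what eventually trims the log in real Kafka.</p>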
<p><strong>Why Is Kafka so fast?</strong></p>
<p>Kafka is fast for a number of reasons; we highlight some of these reasons below.</p>
<ol>
<li>Zero-copy - It relies heavily on the <a target="_blank" href="https://en.wikipedia.org/wiki/Zero-copy">zero-copy</a> principle, i.e. it interacts directly with the OS kernel to move data.</li>
<li>Batching - It batches data in chunks, which enables efficient data compression and thereby reduces I/O latency.</li>
<li>Horizontal scaling - Kafka scales horizontally: a topic can have many partitions (even thousands), spread across thousands of machines on premise or in the cloud, making it very capable of handling high loads.</li>
<li>Avoidance of RAM - Kafka appends to an immutable commit log on disk sequentially, thereby avoiding slow disk seeks.</li>
</ol>
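<p>The batching point above maps directly onto producer settings in <code>kafka-python</code>, which we use later in this tutorial. The settings below are a sketch, not recommendations, and the broker address is a placeholder:</p>
<pre><code class="lang-python"># Illustrative kafka-python producer settings that exercise batching and
# compression; the broker address is a placeholder, the values are examples.
batching_config = {
    "bootstrap_servers": ["localhost:9092"],  # placeholder broker
    "batch_size": 32 * 1024,     # collect up to 32 KB of records per partition batch
    "linger_ms": 20,             # wait up to 20 ms to fill a batch before sending
    "compression_type": "gzip",  # compress whole batches, trading CPU for I/O
}

# With a broker running, the settings would be unpacked into the client:
# from kafka import KafkaProducer
# producer = KafkaProducer(**batching_config)
</code></pre>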
<p><strong>What Problem does Kafka Solve?</strong></p>
<p>With the rise of innovation in various aspects of life, from the internet of things (IoT), self-driving cars, artificial intelligence, blockchain solutions, robotics and many more, the rate of data generation is growing exponentially, and it&#8217;s not slowing down anytime soon. For businesses to innovate, understand their customers better and provide better services, the traditional way of building software needs to be enhanced to incorporate the inflow of these huge and growing datasets from the various data sources mentioned above and others. With Kafka, the various components of a system can communicate in an event-driven approach, where an event from one part of the system is translated into action in another part, and the beauty of this is that it happens in real time.</p>
<p><strong>What Companies Use Kafka?</strong></p>
<p>Thousands of companies use Kafka in production, including Fortune 500 companies. Among them are Microsoft, Netflix, Goldman Sachs, Target, Cisco, Intuit, Box, Pinterest, The New York Times and many <a target="_blank" href="https://kafka.apache.org/powered-by">more</a>.</p>
<p><strong>Getting Started With Kafka.</strong></p>
<p>Kafka involves communication between servers and clients.</p>
<p><strong>Servers</strong>: Kafka runs as a cluster of one or more servers, which can be located in one or multiple data centers, on premise or in the cloud.</p>
<p><strong>Clients</strong>: Kafka clients allow us to write distributed systems/applications that read, write and process streams of events in a fault-tolerant way in case of network or machine failure. Clients are available as REST APIs and in various programming languages including Java, Scala, Go, Python, C/C++ and many others. In this tutorial we will focus on the Python client.</p>
<p>There are several clients we can use to communicate with Kafka:</p>
<ol>
<li><p>Command line</p>
</li>
<li><p><a target="_blank" href="https://github.com/confluentinc/confluent-kafka-python">confluent-kafka</a></p>
</li>
<li><p>kafka-python (what we would be using)</p>
</li>
</ol>
<p><strong>Installation</strong>:</p>
<p><strong>STEP 1:</strong></p>
<p>Download Kafka from <a target="_blank" href="https://www.apache.org/dyn/closer.cgi?path=/kafka/2.7.0/kafka_2.13-2.7.0.tgz">here</a></p>
<p>Run <code>tar -xzf kafka_2.13-2.7.0.tgz</code></p>
<p>Run <code>cd kafka_2.13-2.7.0</code></p>
<p><strong>STEP 2:</strong></p>
<p><strong>NOTE</strong>: Your local environment must have Java 8+ installed.</p>
<p>Open a terminal and run this command:</p>
<p>Run <code>bin/zookeeper-server-start.sh config/zookeeper.properties</code></p>
<p>Open another terminal and run this command</p>
<p>Run <code>bin/kafka-server-start.sh config/server.properties</code></p>
<p><strong>STEP 3</strong>:</p>
<p>Creating a topic to store events</p>
<p>Run this command on another terminal</p>
<pre><code class="lang-bash">bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092
</code></pre>
<p>Run this command to see the topic</p>
<pre><code class="lang-bash">bin/kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:9092
</code></pre>
<p>Which should return something like this</p>
<pre><code class="lang-bash">Topic:quickstart-events  PartitionCount:1    ReplicationFactor:1 Configs:

Topic: quickstart-events Partition: 0    Leader: 0   Replicas: 0 Isr: 0
</code></pre>
<p><strong>STEP 4</strong>:</p>
<p>Run this in your terminal to write an event to the topic</p>
<pre><code class="lang-bash">bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
</code></pre>
<p><strong>STEP 5</strong>:</p>
<p>Run this in your terminal to read events from the topic</p>
<pre><code class="lang-bash">bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092
</code></pre>
<p><strong>Zookeeper</strong> is a consistent file system for configuration information, which Kafka uses to manage and coordinate clusters/brokers, including leadership election for brokers' topic partitions.</p>
<p><strong>Kafka broker</strong>: A Kafka cluster is made up of multiple brokers, each with a unique id. Each broker holds topic log partitions, and connecting to one bootstrap broker connects a client to the entire cluster.</p>
<p>With the steps highlighted above, we now have a running instance of Kafka on our machine. Before we continue, let’s get familiar with concepts of how Kafka works and the components it entails.</p>
<p><strong>Kafka Concepts</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1615242711686/n1-PKfvNN.png" alt="Screenshot_2021-02-26_at_20.06.15.png" /></p>
<p><strong>Events</strong>: An event signifies that something has happened, i.e. data was generated in a part of the system we are interested in, so a record/message is written to a designated topic. Every event written is recorded in a key, value and timestamp format.</p>
<p><strong>Topics</strong>: A Kafka topic is partitioned across different buckets, potentially spread over a number of data centers across regions, to ensure fault tolerance. Events are stored in the order they are written, by appending newly arriving events to the existing ones, and are replicated across the different partitions. Note that each topic is identified by a topic name.</p>
<p><strong>Producers</strong>: Producers are client applications, written with any of the available Kafka clients, that solely write (publish) events, i.e. messages/records, to their designated topic, identified by a topic name. They are written to be agnostic of the consumer: the producer is not aware of the consumer application; it does one job and does it well, writing events to the topic.</p>
<p><strong>Consumers</strong>: Consumers are client applications for consuming events, i.e. messages/records, from a specific topic in the order they arrived.</p>
<p><strong>USING KAFKA-PYTHON</strong></p>
<p>For this tutorial, it&#8217;s assumed that you are familiar with the Python programming language and Python virtual environments. We will be using pipenv as our virtual environment tool, and an open-source Kafka Python client called <a target="_blank" href="https://kafka-python.readthedocs.io/">kafka-python</a>.</p>
<p>We set up our virtual environment by running <code>pipenv shell</code>, and we install kafka-python with <code>pip install kafka-python</code>.</p>
<p>Before we proceed, we need to briefly look at some key terms when working with the <code>kafka-python</code> client.</p>
<h3 id="kafkaproducer"><code>KafkaProducer</code></h3>
<p><code>KafkaProducer</code> is the client responsible for publishing records to a Kafka cluster. It does this via the <strong>send</strong> method, which is asynchronous: when called, it adds the record to a buffer of pending records and returns immediately. The producer also automatically retries failed requests unless configured otherwise, retries being one of the configs that can be set.</p>
<p>Let's create a <code>KafkaProducer</code></p>
<pre><code class="lang-python">import logging

from kafka import KafkaProducer
from kafka.errors import KafkaError

log = logging.getLogger(__name__)

producer = KafkaProducer(bootstrap_servers=['broker1:1234'], retries=5)

future = producer.send('order-topic', b'item_name=Nike Air|item_id=1543|price=23000')

try:
    record_metadata = future.get(timeout=10)
except KafkaError:
    # handle the exception appropriately
    log.exception("failed to send record")
</code></pre>
<p>Let's do a quick walkthrough of what is going on in the above code snippet.</p>
<p><code>KafkaProducer</code> is the class the <code>kafka-python</code> client uses to instantiate a connection to a Kafka cluster.</p>
<p><code>bootstrap_servers</code> is a list of host[:port] entries that the producer should contact to bootstrap initial cluster metadata.</p>
<p>We then send a record from the producer by calling the <code>send</code> method, which takes the topic name as a str (in this case <strong>order-topic</strong>), the message value, and some other optional arguments such as key and timestamp.</p>
<p>As for the synchronous flow: there could be errors, for example if the topic name was not found, in which case the <code>kafka-python</code> client throws a <code>KafkaError</code> exception that we can catch and handle appropriately.</p>
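<p>Instead of blocking on <code>future.get()</code>, the future returned by <code>send</code> also accepts callbacks, which keeps the flow fully asynchronous. Here is a sketch, reusing the <code>producer</code> created above; the key and helper names are ours for illustration:</p>
<pre><code class="lang-python"># Sketch: asynchronous handling of a send, assuming the `producer` from above.
def on_send_success(record_metadata):
    # record_metadata tells us where the record landed
    print("%s:%d:%d" % (record_metadata.topic,
                        record_metadata.partition,
                        record_metadata.offset))

def on_send_error(exc):
    print("send failed:", exc)

def send_order(producer):
    # records sharing a key are routed to the same partition, preserving per-key order
    future = producer.send('order-topic',
                           key=b'item-1543',
                           value=b'item_name=Nike Air|price=23000')
    future.add_callback(on_send_success)
    future.add_errback(on_send_error)
    producer.flush()  # block until buffered records are delivered
</code></pre>
<p>The callbacks fire from the producer's background I/O thread, so the sending code never waits on the broker round-trip.</p>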
<p>We could also send encoded records, for example by using <code>msgpack</code> as the value serializer, or produce JSON messages with a JSON serializer. Here is what that would look like:</p>
<pre><code class="lang-python">import json

import msgpack

# produce msgpack-encoded messages
producer = KafkaProducer(value_serializer=msgpack.dumps)
producer.send('order-topic', {'item_name': 'Nike Air', 'item_id': 1543, 'price': 23000})

# produce json messages
producer = KafkaProducer(value_serializer=lambda m: json.dumps(m).encode('ascii'))
producer.send('order-topic', {'item_name': 'Nike AirForce', 'item_id': 1583, 'price': 28500})
</code></pre>
<p>PS: There are more configs that can be set on the <code>KafkaProducer</code>; see the <a target="_blank" href="https://kafka-python.readthedocs.io/en/master/apidoc/KafkaProducer.html">documentation</a> for the full list.</p>
<h3 id="kafkaconsumer"><code>KafkaConsumer</code></h3>
<p>The consumer subscribes to (reads) records from a Kafka cluster. It transparently handles the failure of servers in the cluster, and adapts as topic partitions are created or migrate between brokers.</p>
<p>Let's create a Kafka consumer</p>
<pre><code class="lang-python">from kafka import KafkaConsumer

# To consume latest messages and auto-commit offsets
consumer = KafkaConsumer('order-topic', group_id='sample-group', bootstrap_servers=['localhost:9092'])

for message in consumer:
    # message value and key are raw bytes -- decode if necessary!
    # e.g., for unicode: `message.value.decode('utf-8')`
    print("%s:%d:%d: key=%s value=%s" % (message.topic, message.partition,
                                         message.offset, message.key, message.value))
</code></pre>
<p>Let's walk through what's going on in the consumer code snippet.</p>
<p><code>KafkaConsumer</code> is the class used to instantiate the consumer.</p>
<p><code>bootstrap_servers</code> – a ‘host[:port]’ string (or list of ‘host[:port]’ strings) that the consumer should contact to bootstrap initial cluster metadata.</p>
<p><code>group_id</code> – the name of the consumer group, which can be joined dynamically if partition assignment is enabled, and which is used for fetching and committing offsets.</p>
<p><code>value_deserializer</code> (callable) – any callable that takes a raw message value and returns a deserialized value.</p>
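<p>To make the offset-committing role of <code>group_id</code> concrete, here is a sketch of a consumer loop with auto-commit turned off, so offsets are only committed after a record has been fully processed. The <code>handle</code> step, topic and broker address are illustrative:</p>
<pre><code class="lang-python"># Sketch: manual offset commits -- if the process crashes before commit(),
# the group re-reads the uncommitted records on restart.
def handle(value):
    # hypothetical processing step; replace with real business logic
    print("processing", value)

def consume_batch(consumer):
    # poll() returns a dict of {TopicPartition: [records]} rather than iterating forever
    batch = consumer.poll(timeout_ms=1000)
    for tp, records in batch.items():
        for record in records:
            handle(record.value)
    consumer.commit()  # commit the consumed offsets for this group

# With a broker available, it would be wired up like:
# from kafka import KafkaConsumer
# consumer = KafkaConsumer('order-topic', group_id='sample-group',
#                          bootstrap_servers=['localhost:9092'],
#                          enable_auto_commit=False)
# consume_batch(consumer)
</code></pre>
<p>This at-least-once pattern trades the convenience of auto-commit for the guarantee that a record is never marked consumed before it has been processed.</p>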
<p>Various approaches to consuming records from a topic:</p>
<pre><code class="lang-python"># consume earliest available messages, don't commit offsets
KafkaConsumer(auto_offset_reset='earliest', enable_auto_commit=False)

# consume json messages
KafkaConsumer(value_deserializer=lambda m: json.loads(m.decode('ascii')))

# consume msgpack-encoded messages
KafkaConsumer(value_deserializer=msgpack.unpackb)

# StopIteration if no message after 1 sec
KafkaConsumer(consumer_timeout_ms=1000)
</code></pre>
<p><strong>Conclusion</strong></p>
<p>Phew! If you've come this far, I say thank you. We've only scratched the surface of what we can do with Kafka; there are many more things that can be achieved by extending the arguments to both the <code>KafkaProducer</code> and the <code>KafkaConsumer</code>, from authentication using SSL and setting SSL certificates to adding new topics dynamically. You can explore more configs in the <code>kafka-python</code> <a target="_blank" href="https://kafka-python.readthedocs.io/en/master/usage.html">documentation</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Getting Started With Django And It's Anatomy (Part 1)]]></title><description><![CDATA[What is Django
Django is a matured Python batteries-included web framework and it's been battle tested over the years with even most popular service you most likely must have used before or the one you still use everyday. To mention a few of them are...]]></description><link>https://horiyomi.com/getting-started-with-django-and-its-anatomy-part-1-1</link><guid isPermaLink="true">https://horiyomi.com/getting-started-with-django-and-its-anatomy-part-1-1</guid><category><![CDATA[Python 3]]></category><category><![CDATA[Django]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[software development]]></category><category><![CDATA[full stack]]></category><dc:creator><![CDATA[Damilola Ogungbesan]]></dc:creator><pubDate>Wed, 20 Jan 2021 14:16:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1611098341772/tf1sKmbdI.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="what-is-django">What is Django</h1>
<p>Django is a mature, batteries-included Python web framework, battle-tested over the years by some of the most popular services you have most likely used before, or still use every day. To mention a few: Spotify, Bitbucket, Robinhood, Instagram, Coursera, Udemy and many more.</p>
<h3 id="what-will-be-covering">What we'll be covering:</h3>
<ul>
<li>Django's Anatomy </li>
<li>Python Installation</li>
<li>Setting Up Virtual Environment</li>
<li>Installing Django </li>
<li>Setting Up a Django Project</li>
<li>Summary </li>
<li>Conclusion</li>
</ul>
<h2 id="djangos-anatomy">Django's Anatomy</h2>
<p>Like every batteries-included framework, Django uses the Models-&gt;Views-&gt;Urls-&gt;Templates pattern for structure. I'll briefly explain how these all come together.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611096549474/YXDUsKMNN.png" alt="Screenshot 2021-01-19 at 23.01.55.png" /></p>
<p><strong>Models (models.py)</strong> is responsible for modeling a table and its fields in the database.</p>
<p><strong>Views (views.py)</strong> is where the business logic resides: processing data retrieved from the database, writing new data, and computing the data to be sent to a template.</p>
<p><strong>Urls (urls.py)</strong> contains the URL mappings to the views that will process the data to be rendered at the registered URLs.</p>
<p><strong>Templates</strong> are responsible for the HTML the browser renders in response to the various requested URLs.</p>
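<p>To see how the four pieces talk to each other, here is a minimal sketch; the model, view and URL names are invented for illustration. Each snippet lives in the file named in its comment, inside a configured Django project, so this is a sketch rather than a single runnable script:</p>
<pre><code class="lang-python"># models.py -- models a table and its fields
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=100)
    body = models.TextField()

# views.py -- business logic: fetch data and hand it to a template
from django.shortcuts import render

def article_list(request):
    articles = Article.objects.all()
    return render(request, "articles/list.html", {"articles": articles})

# urls.py -- maps a URL to the view
from django.urls import path

urlpatterns = [
    path("articles/", article_list, name="article-list"),
]
</code></pre>
<p>A request to <code>/articles/</code> is matched in <code>urls.py</code>, handled by the view, which queries the model and renders the template.</p>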
<h2 id="python-installation">Python Installation.</h2>
<p>You need to download <a target="_blank" href="https://www.python.org/downloads/">Python</a> for your particular operating system; follow the installation wizard and you should be all set.
To confirm that Python has been installed successfully, open up a terminal or command line (Windows users) and type
<code>python --version</code>. You should see something like <code>Python 3.7.6</code> (your version might be different by the time you are reading this). This proves that Python is available on your machine.</p>
<h2 id="setting-up-virtual-environment">Setting Up Virtual Environment</h2>
<blockquote>
<p>A virtual environment is a tool that helps keep the dependencies required by different projects separate, by creating isolated Python environments for them.
There are different virtual environment tools, but I'll be using one called Pipenv. Pipenv is just my personal preference; you can always pick another and use it in place of Pipenv, as everything we discuss in this post will also work with other virtual environment tools.</p>
</blockquote>
<h4 id="install-pipenv">Install Pipenv</h4>
<p>Installation is quite straightforward. Python, whose installation steps were shown earlier, also installs a tool called <strong>pip</strong>, which is used for installing Python packages from <a target="_blank" href="https://pypi.org/">PyPI</a>, the package registry of the Python ecosystem. Run <code>pip install pipenv</code> to install Pipenv on your machine.</p>
<p>The system requirements for writing a Django application are now ready. Let's get into setting up Django.</p>
<blockquote>
<p>Using a virtual environment is optional, but it is the recommended way to set up your project.</p>
</blockquote>
<h2 id="installing-django">Installing Django</h2>
<p>As mentioned earlier, you need to set up a virtual environment for our Django project:</p>
<ol>
<li><p>Run <code>mkdir django-sample-app &amp;&amp; cd django-sample-app</code>. This will create a new directory for the project called <strong>django-sample-app</strong> and change to the directory.</p>
</li>
<li><p>Run <code>pipenv shell</code> to initiate the virtual environment with pipenv. This will create an isolated environment for the dependencies of our project. </p>
</li>
<li><p>Install our dependency, Django, with <code>pipenv install django</code>; at the time of this writing the current version of Django is <code>3.1</code>.</p>
</li>
</ol>
<h2 id="setting-up-a-django-project">Setting Up A Django Project</h2>
<p>To set up a Django project, you run the Django command
<code>django-admin startproject sample_app</code> (note that project names must be valid Python identifiers, so hyphens are not allowed). Your file structure should look like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611092228085/iCNhEVV7j.png" alt="Screenshot 2021-01-19 at 22.36.48.png" /></p>
<p>You now have a Django project set up. Let's take a moment to explain what is going on with all the commands you've been typing.</p>
<p>The command <code>django-admin startproject sample_app</code> does exactly what you're thinking: it creates a new Django project with the name <em>sample_app</em>. Let's discuss what the files generated by that command do.</p>
<ul>
<li><p><code>asgi.py</code> contains the entry point for running Django under an asynchronous (ASGI) server; async support in Django is still in its early stages.</p>
</li>
<li><p><code>settings.py</code> contains all the basic configuration needed to have your Django app running, and is highly extendable and configurable to your project requirements.</p>
</li>
<li><p><code>urls.py</code> contains the URL mappings to the different pieces of business logic.</p>
</li>
<li><p><code>wsgi.py</code> contains the entry point for running Django under a synchronous (WSGI) server.</p>
</li>
<li><p><code>manage.py</code> is the command-line utility that wires the project together and is used to run administrative tasks, ensuring Django is installed and runs.</p>
</li>
</ul>
<p>Django has a concept of <em>apps</em>: an app is a directory containing specific business logic (if you may), and a Django project can have multiple apps. To create an app, run <code>python manage.py startapp &lt;app_name&gt;</code>. We will be creating an app called <em>samples</em>: <code>python manage.py startapp samples</code>. Your file structure should look like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611095012848/DIkfp0uLx.png" alt="Screenshot 2021-01-19 at 23.21.22.png" /></p>
<p>Next, you will need to open the <code>settings.py</code> file to install the just-created app <em>samples</em>. Scroll to <code>INSTALLED_APPS</code>: this is a list of strings, where the strings are the names of the apps belonging to the project, including both Django's built-in apps and the ones created in the project. After adding the new app <em>samples</em>, it should look like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611095307484/NoqeTq0zJ.png" alt="Screenshot 2021-01-19 at 23.23.54.png" /></p>
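<p>In text form, the <code>INSTALLED_APPS</code> list in <code>settings.py</code> looks roughly like this; the <code>django.contrib</code> entries come from the generated project, and only the last line is ours:</p>
<pre><code class="lang-python">INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'samples',  # our newly created app
]
</code></pre>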
<blockquote>
<p>Phewww...... Are we there yet?</p>
</blockquote>
<p>Almost... let's see what it looks like in the browser; after all, aren't we trying to create a web application? :)</p>
<p>To run the application, run <code>python manage.py runserver</code>. It will start up the <em>wsgi</em> development server, running at <code>127.0.0.1:8000</code>. Open this URL in your browser and you should see something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1611096207071/GLpsj5MjB.png" alt="Screenshot 2021-01-19 at 23.39.55.png" /></p>
<blockquote>
<p>PS: you don't have to create a Django app before you can run <code>python manage.py runserver</code> in a Django project.</p>
</blockquote>
<p><strong>Summary</strong> </p>
<p><code>pipenv shell</code> - creates/activates a virtual environment using Pipenv.</p>
<p><code>pipenv install &lt;package_name&gt;</code> - installs a package using Pipenv.</p>
<p><code>django-admin startproject &lt;project_name&gt;</code> - generates a new Django project with the specified name.</p>
<p><code>python manage.py startapp &lt;app_name&gt;</code> - generates a new app, to be installed in settings, with the given app name.</p>
<p><code>python manage.py runserver</code> - starts the Django development server.</p>
<p>Congratulations!!! If you made it this far, I say a big thank you, and I'm convinced you're interested in Django development. Stay tuned for Part 2. Happy hacking!!!</p>
<p>Reference:</p>
<p>Official Django Documentation
https://docs.djangoproject.com/en/3.1/</p>
<p>https://realpython.com/pipenv-guide/</p>
<p>https://www.geeksforgeeks.org/python-virtual-environment/</p>
]]></content:encoded></item><item><title><![CDATA[The Scala Programming Language And Why Should Care]]></title><description><![CDATA[WHAT IS SCALA?
Scala is a general-purpose programming language, i.e it can be an object-oriented and/or functional programming language. 
It's built on top of the JVM(Java Virtual Machine)....yes that's right, are you thinking doesn't that mean Java?...]]></description><link>https://horiyomi.com/the-scala-programming-language-and-why-should-care</link><guid isPermaLink="true">https://horiyomi.com/the-scala-programming-language-and-why-should-care</guid><category><![CDATA[Scala]]></category><category><![CDATA[Beginner Developers]]></category><dc:creator><![CDATA[Damilola Ogungbesan]]></dc:creator><pubDate>Thu, 19 Nov 2020 23:45:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1605826792328/mkymte-md.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="what-is-scala">WHAT IS SCALA?</h1>
<p>Scala is a general-purpose programming language, i.e. it can be used as an object-oriented and/or functional programming language.
It's built on top of the JVM (Java Virtual Machine)... yes, that's right. Are you thinking, doesn't that mean Java?</p>
<p>Well, yes and no. </p>
<blockquote>
<p>Scala source code is intended to be compiled to Java bytecode, so that the resulting executable code runs on a Java virtual machine.</p>
</blockquote>
<p>This is how it achieves interoperability: it simply means you can also write Java (for you Java gurus) in a Scala code base. It also allows libraries written in either language to be referenced in Java or Scala code.
I know you're thinking, what a time to be alive, right? There's more:
Scala is concise, i.e. Scala code is less verbose compared to Java.</p>
<h1 id="why-should-you-care">WHY SHOULD YOU CARE?</h1>
<p>Scala has been, and still is, taking the data world (especially big data) by storm, with tools and frameworks that are state of the art for the current data engineering, data pipelining and machine learning ecosystem and beyond. To mention a few of these tools: Apache Spark, Kafka, Akka, Flink, etc.</p>
<p>Hence, since a lot of these tools are built using Scala, it will be very valuable to add Scala to your toolset.</p>
<h3 id="some-companies-using-scala">Some companies using Scala</h3>
<ul>
<li>Netflix</li>
<li>IBM</li>
<li>Twitter</li>
<li>Airbnb</li>
<li>LinkedIn</li>
<li>Foursquare </li>
<li>Verizon</li>
</ul>
<p>...and a lot more.</p>
<p>If you've come this far, I'd like to think you're really thrilled and want to get started writing some Scala code.</p>
<h1 id="how-to-setup-scala-on-your-machine">HOW TO SETUP SCALA ON YOUR MACHINE</h1>
<h3 id="for-mac-users">For Mac users</h3>
<p><code>brew update</code>
<code>brew install scala</code></p>
<p>You can verify installation by running <code>scala -version</code> on your terminal</p>
<h3 id="for-linux-user">For Linux users</h3>
<p><code>sudo apt-get install scala</code></p>
<h3 id="for-windows-users">For Windows users</h3>
<p>You can download the installer from this <a target="_blank" href="https://downloads.lightbend.com/scala/2.13.1/scala-2.13.1.msi">link</a>. Follow the setup instructions and you should be fine.</p>
<h3 id="some-supported-texteditor-and-ide">Some supported TextEditor and IDE</h3>
<ul>
<li>Vscode</li>
<li>Intellij</li>
<li>Eclipse</li>
</ul>
<p>For some getting started tutorial you can checkout the <a target="_blank" href="https://docs.scala-lang.org/overviews/scala-book/introduction.html">Scala Book</a>.</p>
<h1 id="conclusion">Conclusion</h1>
<p>Scala is a high-demand skillset with few proficient developers available. At the rate at which data is being generated around the world, from websites, mobile apps, IoT devices, cars and many more, the need for Scala developers will keep increasing, yet there are very few around the world compared to the numbers that are and will be needed.</p>
<h2 id="reference">Reference</h2>
<p>https://docs.scala-lang.org/getting-started/index.html</p>
<p>https://www.coresumo.com/how-to-install-scala-on-ubuntu-linux-or-install-scala-on-mac-or-windows-10-or-setup-scala-3-0/</p>
<p>https://en.wikipedia.org/wiki/Scala_(programming_language)</p>
]]></content:encoded></item></channel></rss>