Version Control Systems
A version control system (VCS) comes into use when software systems grow larger and more complex. Developing a huge software system without one is not easy, and a VCS provides a solution to this problem.
A version control system manages the source code of a project across different versions and stages. With a version control system you get:
- Collaboration: with a version control system, everybody on the team is able to work freely on any file at any time. The version control system will later allow you to merge all the changes into a common version.
- Storing versions properly: a version control system acknowledges that there is only one project, so only the version you are currently working on sits on your disk, while all earlier versions are stored inside the system.
- Restoring previous versions: you can roll the project, or a single file, back to an earlier saved version at any time (see the sketch after this list).
- Understanding what happened: every time you save a new version of your project, the version control system requires you to provide a short description of what was changed.
- Backup: every copy of the repository also acts as a backup of the project.
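As a rough illustration of the last few points, here is a minimal Git sketch (the file name and commit messages are invented for the example):
# save two versions of a file, each with a short description of the change
git init demo && cd demo
echo "first draft" > notes.txt
git add notes.txt
git commit -m "Add first draft of notes"
echo "second draft" > notes.txt
git commit -am "Revise the notes"
# understanding what happened: list every saved version with its description
git log --oneline
# restoring a previous version: bring back the file as it was one commit ago
git checkout HEAD~1 -- notes.txt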
Version control system models are:
- Local version control systems
- Centralized version control systems
- Distributed version control systems
A local version control system keeps track of files within the local system. This approach is very common and simple, but it is also error prone: the chance of accidentally writing to the wrong file is higher. This is the oldest kind of version control system, and this model cannot be used for collaborative software development.
Centralized version control systems store the version history on a central server. When a developer wants to make changes to certain files, they pull the files from that central repository to their own computer. After the developer has made changes, they send the changed files back to the central repository.
This kind of version control system can be used for collaborative software development, and administrators have fine-grained control over who can do what. The disadvantage of this system is the single point of failure that the centralized server represents.
Distributed version control systems are a form of version control in which the complete codebase, including its full history, is mirrored on every developer's computer. This allows branching and merging to be managed automatically, increases the speed of most operations (except pushing and pulling), improves the ability to work offline, and does not rely on a single location for backups. Distributed version control systems (DVCS) take a peer-to-peer approach to version control, as opposed to the client-server approach of centralized systems.
With this kind of system:
- Different groups of people can collaborate in different ways simultaneously within the same project.
- If the server a team was collaborating through dies, any of the client repositories can be copied back to restore it, as the sketch below shows.
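For example (a rough sketch; the repository URLs are made up), cloning a distributed repository copies the complete history, and any clone can later reseed a replacement server:
# cloning mirrors the full history, not just the latest snapshot
git clone https://example.com/team/project.git
cd project
git log --oneline --all      # the whole history is available offline
# if the shared server is lost, point a clone at a new server and push everything back
git remote set-url origin https://new-server.example.com/team/project.git
git push --mirror origin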
Are Git and GitHub the same?
No. Git is a revision control system, a tool to manage your source code history. GitHub is a hosting service for Git repositories. So they are not the same thing: Git is the tool, GitHub is the service for projects that use Git.
Git vs GitHub comparison

Git | GitHub
Installed locally | Hosted in the cloud
Maintained by the open-source community (created for Linux development) | Acquired by Microsoft
Primarily a command-line tool | Administered through the web
Open-source licensed | Free tier and pay-per-user tiers
No user management features | Built-in user management
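To make the split concrete, here is a small sketch: Git does all of the work locally, and GitHub only appears once you add it as a remote (the repository URL is a placeholder):
# Git: works entirely on your own machine
git init my-project
cd my-project
echo "hello" > README
git add README
git commit -m "Initial commit"
# GitHub: just one possible hosting service for the repository you already have
git remote add origin https://github.com/your-user/my-project.git
git push -u origin master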
Git commit vs push
Commit saves changes locally, while push sends changes to a remote repository; we touched on both above.
When you commit your changes, you save them as a single logical set in your local repository. You can do this multiple times without pushing. Until they are pushed, they do not leave your local repository, meaning the remote repository won't have these sets of changes yet, so when other people pull from the remote repository, your commits won't be pulled.
When you push, all the commits you made in your local repository are transferred to the remote repository, so when other developers who share this remote repository pull, they will have your changes transferred to their local repositories.
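A minimal sketch of the difference, assuming the usual origin/master names and an invented file:
# commit: record a logical set of changes in the local repository only
git add app.c
git commit -m "Fix null check in parser"
# more local commits; nothing has left your machine yet
git commit -am "Refactor error handling"
# push: transfer all of those local commits to the shared remote repository
git push origin master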
Use of staging area and Git directory
The Git directory is where Git stores the metadata and object database for your project. This is the most important part of Git, and it is what is copied when you clone a repository from another computer.
The working directory is a single checkout of one version of the project. These files are pulled out of the compressed database in the Git directory and placed on disk for you to use or modify.
The staging area is a simple file, generally contained in your Git directory, that stores information about what will go into your next commit. It's sometimes referred to as the index, but it's becoming standard to refer to it as the staging area.
If a particular version of a file is in the Git directory, it's considered committed. If it has been modified and was added to the staging area, it is staged. And if it was changed since it was checked out but has not been staged, it is modified.
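A short sketch of the three states, using an invented file and the section names git status prints:
# modified: the file changed in the working directory but is not staged yet
echo "new line" >> report.txt
git status    # report.txt appears under "Changes not staged for commit"
# staged: a snapshot of the file now sits in the staging area (the index)
git add report.txt
git status    # report.txt appears under "Changes to be committed"
# committed: the snapshot is stored permanently in the Git directory
git commit -m "Update report"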
Collaboration workflow of Git
Git
Workflow is a recipe or recommendation for how to use Git to accomplish work in
a consistent and productive manner. Git workflows encourage users to leverage
Git effectively and consistently. Git offers a lot of flexibility in how users
manage changes. Given Git's focus on flexibility, there is no standardized
process for how to interact with Git. When working with a team on a Git-managed project, it's important to make sure the whole team agrees on how the flow of changes will be applied.
Let's take a general look at how a typical small team would collaborate using this workflow. We'll see how two developers, Amal and Kamal, can work on separate features and share their contributions via a centralized repository.
Amal works on his feature
In his local repository, Amal can develop features using the standard Git commit process: edit, stage, and commit. Since these commands create local commits, Amal can repeat this process as many times as he wants without worrying about what's going on in the central repository.
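That loop might look like this (a sketch; the editor and file names are placeholders):
# Amal can repeat this cycle as often as he likes; every commit stays local
vim feature.c                        # edit
git add feature.c                    # stage
git commit -m "Add feature, step 1"  # commit
vim feature.c
git add feature.c
git commit -m "Add feature, step 2"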
Kamal works on her feature
Meanwhile, Kamal is working on her own feature in her own local repository using the same edit/stage/commit process. Like Amal, she doesn't care what's going on in the central repository, and she really doesn't care what Amal is doing in his local repository, since all local repositories are private.
Amal publishes his feature
Once Amal finishes his feature, he should publish his local commits to the central repository so other team members can access them. He can do this with the git push command, like so:
- git push origin master
Remember that origin is the remote
connection to the central repository that Git created when Amal cloned it. The master
argument tells Git to try to make the origin’s master branch look like his
local master branch. Since the central repository hasn’t been updated since Amal
cloned it, this won’t result in any conflicts and the push will work as
expected.
Kamal tries to publish her feature
- git push origin master
But, since her local history has diverged from the central repository, Git will refuse the request with a rather verbose error message:
error: failed to push some refs to '/path/to/repo.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Merge the remote changes (e.g. 'git pull')
hint: before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
This prevents Kamal from overwriting official commits. She needs to pull Amal's updates into her repository, integrate them with her local changes, and then try again.
Kamal rebases on top of Amal's commits
Kamal can use git pull to incorporate upstream changes into her repository. This command is a bit like svn update: it pulls the entire upstream commit history into Kamal's local repository and tries to integrate it with her local commits:
git pull --rebase origin master
The --rebase option tells Git to move all of Kamal's commits to the tip of the master branch after synchronizing it with the changes from the central repository.
Kamal resolves a merge conflict
Rebasing works by transferring each local commit to the updated master branch one at a time. This means that you catch merge conflicts on a commit-by-commit basis rather than resolving all of them in one massive merge commit. This keeps your commits as focused as possible and makes for a clean project history. In turn, this makes it much easier to figure out where bugs were introduced and, if necessary, to roll back changes with minimal impact on the project.
If Kamal and Amal are working on unrelated features, it's unlikely that the rebasing process will generate conflicts. But if it does, Git will pause the rebase at the current commit and output the following message, along with some relevant instructions:
CONFLICT (content): Merge conflict in <some-file>
The great thing about Git is that anyone can resolve their own merge conflicts. In our example, Kamal would simply run git status to see where the problem is; conflicted files will appear in the Unmerged paths section.
Then, she'll edit the file(s) to her liking. Once she's happy with the result, she can stage the file(s) in the usual fashion and let git rebase do the rest:
git add <some-file>
git rebase --continue
And that's all there is to it. Git will move on to the next commit and repeat the process for any other commits that generate conflicts. If you get to this point and realize you have no idea what's going on, don't panic. Just execute the following command and you'll be right back to where you started:
git rebase --abort
Kamal successfully publishes her feature
After she's done synchronizing with the central repository, Kamal will be able to publish her changes successfully:
git push origin master
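Putting the whole exchange together, here is a condensed sketch of the commands each developer ran (commit messages and the conflicted file name are placeholders):
# Amal: commit locally, then publish first
git commit -am "Finish feature A"
git push origin master          # succeeds, the central repository is unchanged since his clone
# Kamal: her push is rejected because origin has new commits she does not have
git commit -am "Finish feature B"
git push origin master          # rejected: non-fast-forward
# Kamal: replay her commits on top of the updated upstream history
git pull --rebase origin master
# ...resolve any conflicts, then:
git add <some-file>
git rebase --continue
# Kamal: now the push is a simple fast-forward
git push origin master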
Content Delivery Network Benefits
A content delivery network (CDN) is a system of servers deployed in different geographical locations to handle increased traffic loads and reduce the time it takes to deliver content from servers to users. The main objective of a CDN is to deliver content at top speed to users in different geographic locations, and this is done by a process of replication. CDNs provide web content services by duplicating content from origin servers and serving it to users from the nearest data center. The shortest possible route between a user and the web server is determined by the CDN based on factors such as speed, latency, proximity, availability, and so on. CDNs are deployed in data centers to handle challenges with user requests and content routing.
- Improving website load times: by distributing content closer to website visitors via a nearby CDN server (among other optimizations), visitors experience faster page load times. Since visitors are more inclined to click away from a slow-loading site, a CDN can reduce bounce rates and increase the amount of time people spend on the site. In other words, a faster website means more visitors will stay and stick around longer.
- Reducing bandwidth costs: bandwidth consumption is a primary expense for website hosting. Through caching and other optimizations, CDNs reduce the amount of data an origin server must provide, thus reducing hosting costs for website owners.
- Increasing content availability and redundancy: large amounts of traffic or hardware failures can interrupt normal website function. Thanks to their distributed nature, CDNs can handle more traffic and withstand hardware failure better than many origin servers.
- Improving website security: a CDN may improve security by providing DDoS mitigation, improvements to security certificates, and other optimizations.
Differences between CDNs and web hosting
1. Web hosting is used to host your website on a server and let users access it over the internet. A content delivery network is about speeding up the access/delivery of your website's assets to those users.
2. Traditional web hosting delivers 100% of your content from one location. If users are located across the world, they still must wait for the data to be retrieved from wherever your web server sits. A CDN takes a majority of your static and dynamic content and serves it from across the globe, decreasing download times. Most of the time, the closer the CDN server is to the visitor, the faster the assets will load for them.
3. Web hosting normally refers to one server. A content delivery network refers to a global network of edge servers that distributes your content from a multi-host environment.
When choosing the right CDN service for your website, you can choose between paid CDN services and public CDN services, which are free of charge. Both types of services have their advantages and disadvantages, but it's important to choose the one that best fits your needs.
Free public CDN services
The best free public CDN services are:
- Google CDN
- Microsoft CDN
- jsDelivr CDN
- cdnjs CDN
- jQuery CDN
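One quick way to see a public CDN in action is to request a library file and look at the response headers; the jsDelivr URL below is only an example, and the exact headers differ between providers:
# fetch just the headers of a jQuery build served from the jsDelivr CDN
curl -sI https://cdn.jsdelivr.net/npm/jquery@3.6.0/dist/jquery.min.js
# headers such as Cache-Control, Age, or an X-Served-By/X-Cache entry indicate that
# the file came from a nearby edge cache rather than a single origin server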
There have been a lot of
changes in the CDN vendor landscape over the past few months, so here’s an
updated list of all the vendors I am tracking. They are broken out by vendors
that offer commercial CDN services to content owners, and vendors that offer CDN
platforms for MSOs and carriers.
Commercial
CDNs (sell to content owners and publishers)
- Akamai
- Amazon
- CDNetworks
- CDN77
- ChinaCache (focused primarily in China)
- ChinaNetCenter (focused primarily in China)
- Comcast
- Fastly
- Google Cloud
- Instart Logic
- Level 3
- Limelight Networks
- Microsoft Azure (has no CDN of its own; resells Verizon and Akamai)
Virtualization happens at the following levels:
- Hardware virtualization: VMs, emulators
- OS-level virtualization (desktop virtualization): remote desktop terminals
- Application-level virtualization: runtimes (JRE/JVM, .NET), engines (game engines)
- Containerization: Docker
- Other virtualization types: database, network, storage, etc.
Pros and cons of virtualization
Pros of virtualization:
- Reduced costs
- Automation
- Backup and recovery
- Efficient resource utilization
Cons of virtualization:
- Hefty upfront costs
- Security concerns
- Time spent learning
- Not all hardware or software can be virtualized
The best data visualization tools available are:
- Tableau. Tableau is often regarded as the grand master of data visualization software, and for good reason.
- Qlikview. Qlik, with their Qlikview tool, is the other major player in this space and Tableau's biggest competitor.
- FusionCharts.
- Highcharts.
- Datawrapper.
- Sisense.
What is a hypervisor and what is its role?
A hypervisor is a hardware virtualization
technique that allows multiple guest operating systems (OS) to run on a single
host system at the same time. The guest OS shares the hardware of the host
computer, such that each OS appears to have its own processor, memory and other
hardware resources. A hypervisor is also known as a virtual machine manager.
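On a Linux host you can check whether the CPU offers hardware support for a hypervisor and whether the KVM modules are loaded; a small, Linux-specific sketch:
# a non-zero count means the processor exposes Intel VT-x (vmx) or AMD-V (svm)
egrep -c '(vmx|svm)' /proc/cpuinfo
# check whether the KVM hypervisor modules are currently loaded
lsmod | grep kvm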
Emulation and virtualization carry many similarities, yet they have distinct operational differences. If you're looking to access an older operating system within a newer architecture, emulation would be your preferred route. Conversely, virtualized systems act independently of the underlying hardware. We'll look to separate these often confused terms, and describe what each of them means for business IT operations.
What’s the difference?
Emulation, in short, involves making one system imitate another. For example, if a piece of software runs on system A and not on system B, we make system B “emulate” the working of system A. The software then runs on an emulation of system A.
In this same example, virtualization would involve taking system A and splitting it into two servers, B and C. Both of these “virtual” servers are independent software containers, having their own access to software based resources – CPU, RAM, storage and networking – and can be rebooted independently. They behave exactly like real hardware, and an application or another computer would not be able to tell the difference.
Each of these technologies has its own uses, benefits, and shortcomings.
Emulation
In our emulation example, software fills in for hardware – creating an environment that behaves in a hardware-like manner. This takes a toll on the processor by allocating cycles to the emulation process – cycles that would instead be utilized executing calculations. Thus, a large part of the CPU muscle is expended in creating this environment.
Interestingly enough, you can run a virtual server in an emulated environment. So, if emulation is such a waste of resources, why consider it?
Emulation can be effectively utilized in the following scenarios:
• Running an operating system meant for other hardware (e.g., Mac software on a PC; console-based games on a computer)
• Running software meant for another operating system (running Mac-specific software on a PC and vice versa)
• Running legacy software after comparable hardware becomes obsolete
Emulation is also useful when designing software for multiple systems. The coding can be done on a single machine, and the application can be run in emulations of multiple operating systems, all running simultaneously in their own windows.
Virtualization
In our virtualization example, we can safely say that it utilizes computing resources in an efficient, functional manner – independent of their physical location or layout. A fast machine with ample RAM and sufficient storage can be split into multiple servers, each with a pool of resources. That single machine, ordinarily deployed as a single server, could then host a company’s web and email server. Computing resources that were previously underutilized can now be used to full potential. This can help drastically cut down costs.
While emulated environments require a software bridge to interact with the hardware, virtualization accesses hardware directly. However, despite being the overall faster option, virtualization is limited to running software that was already capable of running on the underlying hardware. The clearest benefits of virtualization include:
• Wide compatibility with existing x86 CPU architecture
• Ability to appear as physical devices to all hardware and software
• Self-contained in each instance
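QEMU shows the two ideas side by side; a hedged sketch that assumes a Linux host with KVM available and a prepared guest image called disk.img:
# pure emulation: QEMU translates every guest instruction in software (flexible but slow)
qemu-system-x86_64 -m 2048 -hda disk.img
# hardware virtualization: the same guest runs under KVM, so its code executes directly
# on the CPU (much faster, but guest and host must share the x86 architecture)
qemu-system-x86_64 -enable-kvm -m 2048 -hda disk.img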
VMs and containers
What are VMs?
A virtual machine (VM) is an emulation of a computer system. Put simply, it makes it possible to run what appear to be many separate computers on hardware that is actually one computer. The operating systems (OS) and their applications share hardware resources from a single host server, or from a pool of host servers. Each VM requires its own underlying OS, and the hardware is virtualized. A hypervisor, or a virtual machine monitor, is software, firmware, or hardware that creates and runs VMs. It sits between the hardware and the virtual machine and is necessary to virtualize the server.
Since the advent of affordable virtualization technology and cloud computing services, IT departments large and small have embraced virtual machines (VMs) as a way to lower costs and increase efficiencies.
Benefits of VMs
- All OS resources available to apps
- Established management tools
- Established security tools
- Better known security controls
What are Containers?
With containers, instead of virtualizing the underlying computer like a virtual machine (VM), just the OS is virtualized. Containers sit on top of a physical server and its host OS, typically Linux or Windows. Each container shares the host OS kernel and, usually, the binaries and libraries, too. Shared components are read-only. Sharing OS resources such as libraries significantly reduces the need to reproduce the operating system code, and means that a server can run multiple workloads with a single operating system installation. Containers are thus exceptionally light: they are only megabytes in size and take just seconds to start. Compared to containers, VMs take minutes to start and are an order of magnitude larger than an equivalent container.
In contrast to VMs, all that a container requires is enough of an operating system, supporting programs and libraries, and system resources to run a specific program. What this means in practice is that you can put two to three times as many applications on a single server with containers as you can with a VM. In addition, with containers you can create a portable, consistent operating environment for development, testing, and deployment.
Docker is a container technology, similar to Linux Containers (LXC).
Docker
started as a project to build single-application LXC containers, introducing
several changes to LXC that make containers more portable and flexible to use.
It later morphed into its own container runtime environment. At a high level,
Docker is a Linux utility that can efficiently create, ship, and run containers.
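A minimal sketch of that create/ship/run cycle (image, registry, and port values are invented):
# create: build an image from the Dockerfile in the current directory
docker build -t my-app:1.0 .
# ship: tag the image for a registry and push it there
docker tag my-app:1.0 registry.example.com/my-app:1.0
docker push registry.example.com/my-app:1.0
# run: start a container from the image; it shares the host's Linux kernel
docker run -d --name my-app -p 8080:8080 my-app:1.0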
Benefits of Containers
- Reduced IT management resources
- Reduced size of snapshots
- Quicker spinning up apps
- Reduced & simplified security updates
- Less code to transfer, migrate, and upload when moving workloads
Differences between VMs and containers