Sunday, 6 September 2015

Running a C Graphics Program on 64-bit Windows 7 in the CodeBlocks 13.12 IDE


Today we are going to see how to run a C graphics program on 64-bit Windows 7 using CodeBlocks 13.12.

It takes only 12 steps. Follow them carefully and you are done.

These are the instructions I have tried on 64-bit Windows 7:
  1. Download the zip file from this repository: https://github.com/stahta01/windows-games
     It has the latest files of the WinBGIm graphics library.

  2. It does not matter whether you use an external GCC compiler or the one bundled with CodeBlocks; the procedure below works for both.
  3. Copy the following files from windows-games-master\WinBGIm to your MinGW folder:

       include folder => graphics.h   goes to   MinGW\include
       src folder     => Winbgim.h    goes to   MinGW\include
       lib folder     => libbgi.a     goes to   MinGW\lib

     If any of these files already exist at the destination, overwrite them; these copies work on
     64-bit versions.

  4. Now go to Settings => Compiler and debugger => Selected compiler => GNU GCC Compiler.
     I recommend making a copy of the existing compiler and renaming it, so that any mess here
     will not disturb how your other programs build.



  5. In Linker settings => Add => File, add C:\MinGW32\lib\libbgi.a (adjust the path to your MinGW installation).
      
  6. On the right-hand side, under Other linker options, paste these parameters:
     -lbgi -lgdi32 -lcomdlg32 -luuid -loleaut32 -lole32
       
  7. Half of the configuration is now done.

          
  8. Create a simple empty project in CodeBlocks and add a file with the .cpp extension.
  
  9. The most IMPORTANT part: right-click on the project name => Build options => Selected compiler,
     and select the compiler you configured above.
     Otherwise you will get:
       i.  a memory access violation ("Process returned -1073741819 (0xC0000005)"), or
       ii. a segmentation fault.
  
     After completing the above steps, try the following program as Hello.cpp.

     This is my sample program; it draws a few shapes and prints a message on the screen:

 #include <graphics.h>
 #include <conio.h>

 int main()
 {
     int gd = DETECT, gm;
     int left = 100, top = 100, right = 200, bottom = 200;
     int x = 300, y = 150, radius = 50;

     /* With WinBGIm the BGI path argument is ignored, so any string works. */
     initgraph(&gd, &gm, "C:\\TC\\BGI");

     rectangle(left, top, right, bottom);
     circle(x, y, radius);
     bar(left + 300, top, right + 300, bottom);
     line(left - 10, top + 150, left + 410, top + 150);
     ellipse(x, y + 200, 0, 360, 100, 50);
     outtextxy(left + 100, top + 325, "My First C Graphics Program");

     getch();        /* wait for a key press before closing the window */
     closegraph();
     return 0;
 }


    10. Rebuild the project.

    11. Compile and run.

    12. You should see the shapes and the message drawn on the screen.
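As a sanity check on step 6, here is roughly what the link step CodeBlocks runs boils down to, sketched for a MinGW shell. This assumes g++ is on your PATH and the WinBGIm files were copied as in step 3; the exact command your IDE generates may differ.

```shell
# The linker parameters from step 6, gathered into one variable.
LIBS="-lbgi -lgdi32 -lcomdlg32 -luuid -loleaut32 -lole32"

# Show the full command; uncomment the real call on a Windows/MinGW machine.
echo "g++ Hello.cpp $LIBS -o Hello.exe"
# g++ Hello.cpp $LIBS -o Hello.exe
```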



Thursday, 25 June 2015

Wireless Connectivity in CentOS 6.x

Hi,

Banging your head against the wall because you cannot connect to the free Wi-Fi on campus or at the coffee shop?

It's time to end that.

Just two commands and you are done.


For CentOS 6/RedHat 6:

# rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm

Then install the NIC driver (kmod-r8168 is the driver for Realtek RTL8168-family adapters; pick the kmod package that matches your hardware):

# yum install kmod-r8168

It may take some time to take effect (even on a system with 8 GB RAM and a 1 TB disk ;)

After the installation finishes, I hope you will see the wireless connection in Linux!

#CentOS6 #CentOS7 #Redhat #Ralink #Wireless #Wifi

Install the Linux Kernel Headers and Run a Hello World Module in Easy Steps

I have observed that people like me struggle a lot while writing their first Linux kernel module. Many give up in the middle because of all the hassle. For them, I am writing in simple English how to write your first LKM, i.e., Linux Kernel Module.

The prerequisites for an LKM are as follows:

For Fedora/CentOS:

Before starting, check which kernel version you are running:
 
# uname -r
 
3.17.4-301.fc21.x86_64

Now append that result to the following command:
 
# yum install kernel-devel-3.17.4-301.fc21.x86_64

and this will install the headers package for your current kernel version.
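The two steps above (reading the kernel release, then appending it to the package name) can be combined into a single line using shell command substitution. This is a sketch; the Fedora/CentOS package name pattern kernel-devel-$(uname -r) is the assumption being made here:

```shell
# Build the package name from the running kernel release, so it always
# matches the kernel you actually booted; then hand it to yum (as root).
pkg="kernel-devel-$(uname -r)"
echo "$pkg"
# yum install "$pkg"    # uncomment and run as root
```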

For Ubuntu/Debian (experimental):

# apt-get install build-essential linux-headers-$(uname -r)


Additional packages you will require:

For Fedora/RedHat/CentOS:

# yum install gcc        - to compile the module via the Makefile
# yum install rsyslog    - to see the resulting logs

For Ubuntu/Debian
# apt-get install gcc          
# apt-get install rsyslog

Remember to do kernel-related operations as a non-root user wherever possible.

Create a directory, say LKP, and move into it:

$ mkdir -p Documents/LKP
$ cd Documents/LKP/

$ touch hello.c

hello.c should look like this:

 #include <linux/module.h>   /* Needed by all modules */
 #include <linux/kernel.h>   /* Needed for the KERN_* log levels */
 #include <linux/init.h>     /* Needed for the __init/__exit macros */

 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Mayur Patil");
 MODULE_DESCRIPTION("Hello World module");
 MODULE_VERSION("v0.1");

 static int __init hello_start(void)
 {
     printk(KERN_INFO "Loading hello module...\n");
     printk(KERN_INFO "Hello world\n");
     return 0;
 }

 static void __exit hello_end(void)
 {
     printk(KERN_DEBUG "End of Hello World Kernel Module\n");
     printk(KERN_DEBUG "DEBUG IS SUCCESSFUL\n");
 }

 module_init(hello_start);
 module_exit(hello_end);



Be careful with the "Makefile" (yes, the name must be exactly as typed).

The next important thing: the recipe lines (marked <tab> below) must start with a tab character, not spaces.

ifneq ($(KERNELRELEASE),)
obj-m := hello.o
else
KERNEL_SOURCE := /usr/src/kernels/3.17.4-301.fc21.x86_64
PWD := $(shell pwd)

default:
<tab>${MAKE} -C ${KERNEL_SOURCE} SUBDIRS=$(PWD) modules

clean:
<tab>${MAKE} -C ${KERNEL_SOURCE} SUBDIRS=$(PWD) clean

endif


Copy and paste these contents exactly, changing only KERNEL_SOURCE to the path of your kernel source tree. Then build the module:

$ make

This produces hello.ko in the same directory.

Now, to see the kernel messages, we need to know the log levels.

KERN_INFO     "6"    Informational message,
                     e.g. startup information at driver initialization

KERN_DEBUG    "7"    Debug messages



Now log in as root and give the following command. (A message reaches the console only when its level is numerically lower than the current console log level, so level 8 lets the KERN_DEBUG (7) messages through; dmesg shows them regardless.)

# echo "8" > /proc/sys/kernel/printk

# cat /proc/sys/kernel/printk
8          4         1                7

The four numbers mean:

8                 4                   1                7
current    default       minimum    boot-time default

Insert the module:

# insmod hello.ko

Check whether the module loaded:

# lsmod | grep hello

Now check the logs to see whether the messages appeared:

# cat /var/log/messages
or

# tail -f /var/log/messages

If you want the kernel ring buffer, including the debug messages:

# dmesg

To clear the buffer:

# dmesg -c


Now it's time to remove your module, i.e., unload it:

# rmmod hello


Check again for the exit messages:

# dmesg

In this tutorial, we set up the kernel build environment, built our first "Hello World!" module, and loaded and unloaded it.

Friday, 2 January 2015

Mozilla Firefox More than A Web Browser: Getting Started

Hi All,

This is to inform you that my college is organizing a Mozilla-oriented event called

                   "Mozilla Firefox More than A Web Browser: Getting Started"

The only thing I feel sorry about is that people outside the college who read this won't be able to participate; this time it is for our college students only. :)

We are having with us:

1. Ankit Gadgil:        Mozilla Reps Mentor for Pune
2. Diwanshi Pandey:   Mozilla Rep for the Pune area, lead speaker of WoMoz
3. Siddharatha Rao:    Customization of Mozilla Firefox
4. Ankit Mehta:         

Please note:
1. Only 60 registrations are allowed (we will try to increase the number depending on your response).
2. It is on a first-come, first-served (FCFS) basis.
3. As this is a free, hands-on event, you need to bring:
    - a laptop
    - with the latest Firefox installed, in the language of your choice: https://www.mozilla.org/en-US/firefox/all/

Date & Time:   Saturday, 17 Jan 2015, 10:00 am to 4:00 pm
Venue:              Project Lab, 3rd Floor,
                        Dept of Computer Engineering,
                        MITAOE, Alandi,
                        Pune.

If you are establishing a new Firefox club and need sample job roles for the committee members organizing an event, here is a draft I have made.

You can download it from:    http://goo.gl/qLtsDb

Thursday, 22 May 2014

IaaS based Private Cloud Features Compatible with AWS Public Cloud - Part 3

Now we will take a look at one of the most important parts of cloud computing, called orchestration.


10. Orchestration

Before getting into technical terminology, let's look at the everyday meaning. It is rare for someone to say they have never heard the word "orchestra". What does it mean? A group of instrumentalists, especially one combining string, woodwind, brass and percussion sections. What makes an orchestra work is combination and coordination: without them there is neither fine-tuned music nor any song at all.

Similarly, in cloud computing, orchestration is the component and service that manages and scales all the components external and internal to the cloud system, so that they coordinate and communicate with each other effectively, ensuring smooth operations.

That's all about the cloud components! Here is a brief table comparing the components, in the context of AWS Public Cloud:


Cloud Service                 | AWS                     | Eucalyptus                       | OpenStack     | CloudStack*
------------------------------+-------------------------+----------------------------------+---------------+-----------------------------
Identity & Access Management  | IAM                     | CLC                              | Keystone      | CloudStack management server
Compute                       | Internally              | NC                               | Nova          | CloudStack agent
Object Storage                | S3                      | Walrus                           | Swift         | --
Block Storage                 | EBS                     | SC                               | Cinder        | --
Networking                    | VPC, Direct Connect     | CC (works with other components) | Neutron, Nova | --
Image                         | Internally              | CLC                              | Glance        | --
Database                      | RDS, DynamoDB, SimpleDB | None                             | Trove         | --
Billing/Logging               | CloudWatch              | CloudWatch                       | Ceilometer    | CloudStack usage monitor
Load Balancing                | ELB                     | ELB                              | Neutron       | CloudStack management server
Autoscaling                   | Auto Scaling            | Auto Scaling                     | Heat          | CloudStack management server
Orchestration                 | Internally              | CLC                              | Heat          | CloudStack management server

*This section is under construction. Suggestions are welcome !


References:
1. http://zenodo.org/record/7571/files/CERN_openlab_report_Michelino.pdf

IaaS based Private Cloud Features Competing/Compatible with AWS Public Cloud - Part 2

Here is the second part:


5. Database:
Raghav's business is expanding day by day, and he finds it hard to hire a skilled DBA on short notice while customer demand is on fire. He himself is good at MySQL, but only from a developer's point of view. What could be an alternative for scaling and managing data at enterprise level? Cloud computing provides a web service that manages database products. It operates various types of databases without manual intervention, so customer data is easier to manage alongside the services discussed above.


6. Billing and Logging
Govinda is a sysadmin who has set up the cloud platform for his company. Although everything goes as planned, he still faces troubleshooting requests from users. His boss asks why he cannot launch more than one instance; developers ask why they cannot launch instances with higher configurations. Govinda thinks hard about this problem. The missing piece is metrics gathering: billing and logging. Using this cloud service, whoever manages the cloud can see the resource usage. Based on budget and other constraints, they can set up rules and parameters for managing future instances. Now Govinda can gather and analyze usage data for the week, and he can use policies and alarms so that other users consume only what they need, not what they want.

8. LB (Load Balancing):
Keshav manages servers that come under heavy traffic during the afternoon. If one of them fails, it also reflects on Keshav's ability to manage the infrastructure responsibly. One can neither predict nor rule out failures in an infrastructure, especially under heavy traffic and many requests per second to web servers. What should he do? This is where the LB comes into the picture. With this service, whenever the load on a system grows beyond a specific capacity, another copy of the same machine automatically starts to handle that load without affecting the original server. It helps to:
  1. Reroute traffic from failed to running instances.
  2. Restore traffic to a restored instance (a failed instance running again), and act as a first line of defense for the network.
Check for more details: Load Balancing in AWS


9. Auto Scaling:
Madhav is a big data analyst at 123 Company. He works with 1 TB or more of data every day. What if, one day, the load on his machine suddenly increases and he cannot deliver his analytics? How can the load be managed automatically? In such cases, Auto Scaling comes in handy. It manages the load on the machines by starting new instances whenever such a scenario occurs, and automatically terminating them as soon as the load decreases.


I know your next question: "Hey, you are mixing up Auto Scaling and ELB, aren't you?" I would say no. As I understand it, the main difference comes down to the word SCALING. Here is how:

Difference:
1. Type of scaling:

With ELB, you can only route and reroute traffic from one instance to another based on its health state. It does not deal with the number of instances [horizontal scaling] or the size of an instance [vertical scaling]; it just adds a running instance, irrespective of its resources.

With Auto Scaling, you can do both types of scaling, either one or both.

2. Modes:

In simple words, Auto Scaling decides which resources should be used to manage the load, while ELB decides how to distribute the load in a well-designed and engineered manner.

Here is link for next article:  Part 3


References:
  1. http://searchcloudapplications.techtarget.com/definition/cloud-orchestrator
  2. http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/WhatIsAutoScaling.html
  3. http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/SvcIntro.html
  4. http://stackoverflow.com/questions/8426266/aws-autoscaling-and-elastic-load-balancing

Wednesday, 21 May 2014

IaaS based Private Cloud Features Competing/Compatible with AWS Public Cloud - Part 1

About this article series:

Welcome to this tutorial series on Infrastructure as a Service (IaaS) cloud services with respect to Amazon Web Services (AWS). It will help you understand and map the features of AWS cloud services onto other IaaS-based cloud platforms. I'd like to quote this line from 'The Matrix':

"You've to let it all go, Neo. Fear, doubt and disbelief!" (Ref: The Matrix, 1999)

The cloud is like the Matrix: unless you enter it, you will not understand what it is.

Let's take a quick overview of the most popular core components; at the end, we'll see their codenames in the respective cloud platforms:

1. IAM (Identity and Access Management):
Imagine a user holding admin access rights while still keeping their original role. And what if there are more than a million users in total? It's a nightmare for any administrator. On cloud platforms, this is easily manageable through a cloud service; in AWS it is known as Identity and Access Management (IAM). It is a policy-based service with which the admin can grant specific permissions to users, so that they have restricted access to the resources of their account and cannot interfere with the operations of other users or the admin. For example, a user can see and run only his own instances, not the instances of other users on the same cloud platform.

2. Compute (computation-related resources):

Rama has a machine with 1 GB RAM, 1 vCPU and a 50 GB HDD. If he wants to test Windows 8, he has to buy a new workstation compatible with it. With a cloud service, on the other hand, it is as simple as turning your system on. The power behind this is the Compute service: the more powerful your machine in the cloud, the faster and more efficient your operations. It provides the memory, CPU cores and disk size you need for your tasks.

3. Storage (Object & Block):
Hari is a savvy big data developer who wants to store and retrieve his data from the cloud frequently for his work. Which storage type should he use? Two main services are available: object storage and block storage. Object storage is useful for storing and retrieving data as whole objects, which can easily be accessed over SOAP or REST API requests; its main advantage is write once, read anywhere. He should use block storage when large data needs frequent updates; here data is stored as a bunch of bytes/records, i.e., a block, and its advantage is that contents get updated at a faster rate.

4. VPC (Networking):
Gopal wants to create his own network, similar to that of a large organization. It seems difficult at first, but with cloud services it is not. In cloud computing, the networking service gives you the power to create and architect your own networks. Along with it, you can also work with subnets, routers, firewalls, load balancers and much more.