2016 is coming to an end and it's the perfect time to reflect on the past year and look ahead to 2017. In the spirit of looking back on the year, here is a curated list of the hottest articles of 2016, ranked by page traffic.
We know that Postman is a REST client that runs as an application inside the Chrome browser. It is very useful for interfacing with the REST APIs of your enterprise application.
Postman Pro, an updated API development tool chain, was released last week. The Pro tool chain expands upon the Postman cloud by adding cloud collaboration, improving collection documentation and publishing, and taking the Postman monitoring tool out of beta.
The new Postman Pro will allow API developers to leverage the power of Postman throughout every link in the API tool chain.
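As an illustration of what such a tool exercises, here is a minimal sketch of composing the same kind of request in plain Python with the standard library; the endpoint, token, and payload below are placeholders, not a real API.

```python
import json
import urllib.request

# Compose the same kind of request you would build in Postman:
# method, URL, headers, and a JSON body. The endpoint and token
# are placeholders -- substitute your own API's values.
req = urllib.request.Request(
    url="https://api.example.com/v1/orders",
    data=json.dumps({"item": "book", "qty": 2}).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <your-token>"},
    method="POST",
)

# urllib.request.urlopen(req) would actually send it; here we just
# inspect what was built, much like Postman's request preview.
print(req.get_method(), req.get_full_url())
```

Postman's value is doing exactly this composition, plus history, collections, and sharing, without writing any code at all.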
Design
Typically it's an event, lasting several hours or days, in which a team of professionals with different backgrounds comes together. This small, focused, passionate group works around a problem or idea. The objective is to collaboratively build a unique solution from scratch. The output is a working model of the conceptual idea, using a little production data.
The design principle is to sense the input, optimize the process, and act on the output.
Recent Experience
Earlier, my hackathon experiences were personal ones. So far, I have worked in leading consultancy and captive firms in India; now that I am part of a product company, this is the first hackathon at my workplace.
So, I thoroughly enjoyed the 36-hour workplace hackathon with my colleagues. Team members represented different roles: product manager, developer, tester, and infrastructure specialist.
Our Pitch
Our team built a pitch to leverage IoT and Big Data as part of product development.
Lessons Learnt
Before the event, it is essential to make sure all your infrastructure is set up and tested, in a ready-to-go state.
By design, you should simplify your project as much as possible. There is a high probability that some features are not that important for the prototype, so prioritize ruthlessly and drop them.
As a best practice, save at least the last couple of hours for production testing and final bug fixes.
On the lighter side, plan to sleep and take breaks whenever you are too tired or your productivity is low. Don't forget to feed yourself proper food at regular intervals.
Top 5 Benefits
In my mind, the top 5 business and technical benefits of a hackathon are listed below:
Democratization of Innovation
This hackathon event created yet another opportunity toward my personal objective: Continuous Learning & Continuous Sharing.
Until the next event, I'm signing off from this space.
Google unveiled the Daydream VR headset last month, along with its Google Pixel and Pixel XL smartphones, and the device finally goes on sale next week, on November 10.
The headset will be available on the Google Store in five countries across the globe: the US, Canada, the UK, Germany, and Australia.
The Daydream VR platform is a major upgrade over Google Cardboard. The new platform gives developers the ability to create applications that deliver better productivity in different usage scenarios.
Google has teamed up with Hulu to deliver a custom UI and content for Daydream VR. Users can stream Hulu's entire library of TV shows, movies, and VR content.
Google Daydream View VR is priced at $79 and will be available in Slate, Snow and Crimson color variants.
We often get this question: now that Angular 2 is available, where do we start?
There are literally tons of articles and blog posts on different Angular 2 topics. Some of them are very good and very deep, but not all of them are appropriate for someone who just wants to get started with Angular 2.
David Cearley of Gartner has identified the top ten strategic technology trends for the year ahead. He defines "strategic" as those technologies that "will have significant disruptive potential over the next five years." He also notes that they are "prime enablers behind digital and algorithmic business opportunities." The attached image is a summary of the trends.
This list differs from the lists for 2016, 2015, and 2014 in that there are more trends not yet implemented even by leading CIOs than in years past.
In the modern world, collaboration and socialisation are the keys to success.
Crowdsourcing is a recent trend in the industry, in which a product or framework is built entirely by an open group on a voluntary basis.
In this context, I am so happy and proud of our college professor and mentor. Today, I learnt that her tutorial video on the Hadoop and Big Data platform has been released. She educated the audience clearly, in her own powerful style.
As an ordinary citizen, I always wonder and worry about the clean functioning of my government. I have tons of questions about the benefits of digitization for the common citizen.
Digital India
Digital India is an initiative of the Government of India to integrate the government departments and the people of India. Its primary aim is to ensure that government services are made available to citizens electronically, reducing paperwork. It also aims to connect rural areas with high-speed internet networks.
The project is slated for completion by 2019. The initiative will offer public healthcare, education, and judicial services across all ministries, built on the 9 pillars depicted above.
Use Case
Here is one of the best use cases: Electric Power Utilization through Digital India.
The Union power ministry intends to provide energy-efficient lighting covering the entire country by 2019 and to distribute 77 crore LED bulbs under the Unnat Jyoti by Affordable LEDs for All (UJALA) scheme. It aims for an annual reduction of over Rs 42,000 crore in the electricity bills of Indian consumers.
Technology Benefits
As a blessed tech geek, I was astonished by the below top-5 factors when browsing the Indian government power website:
Real time update
Enjoy it at http://www.ujala.gov.in/
Closing Note
To be honest, a lot of IT firms are struggling to achieve and leverage emerging technology. An awesome achievement by the Indian Government!
Nostalgia is the pleasure of reliving the glories of olden days.
When I rewind my life's clock by 25 years, I am thrilled to recap the storage experience. During my undergraduate days in the late '80s, our lab had one powerful (note the point) PC (Personal Computer) from IBM, at a high cost, of course. The name of the system was the IBM PC XT 286.
That powerful PC's configuration was the "wow" factor then:
Processor: 80286 @ 6 MHz
Main memory: 640 KB RAM
Graphics adapter: CGA
Hard disk: 20 MB
Floppy disks: 5¼" 1.2 MB & 3½" 720 KB
Operating system: IBM PC DOS 3.3
BIOS dated: 21/04/1986
The above picture shows the near-supercomputer of my college days, with its 20 MB of hard disk storage. Today, watching the news of SanDisk's 1 TB SD card (over 50,000 times the capacity), I am amazed!
SanDisk has unveiled the biggest SD card in the world: a prototype card with an outrageous 1 terabyte of storage. Ref: https://techcrunch.com/2016/09/20/sandisks-1tb-sd-card-has-more-storage-than-my-computer/
It is interesting to note that it was only 16 years ago that the company introduced its first 64 megabyte SD card, while two years ago it debuted the 512 GB card, which was then the world's biggest. Things have moved fast, though: compared to the 64 MB card, today's 1 TB version offers 16,384 times more storage.
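The capacity ratios quoted above check out, taking 1 KB = 1024 bytes:

```python
# Capacity ratios behind the figures above (1 KB = 1024 bytes).
MB = 1024 ** 2
TB = 1024 ** 4
print(TB // (64 * MB))   # 16384 -> 1 TB vs the first 64 MB card
print(TB // (20 * MB))   # 52428 -> 1 TB vs a 20 MB XT-era hard disk
```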
This SDXC card is only a prototype at this point, with no details available on price or release date, but it's still an impressive milestone. The company says the 1TB card is necessary to match the increasing demand for memory-heavy formats, including 4K and 8K footage, 360-degree video and mixed reality (MR).
However, there will be some downsides. The 1TB card is certain to be prohibitively expensive, and at such a large capacity, read and write speeds are going to be comparatively slow. Now imagine the anguish if your 1TB card corrupts and you lose everything on it.
It's no wonder the world may see petabyte or zettabyte SD cards in a d(r)ecent time frame. Isn't that dramatic IT growth over the last few years?
A couple of weeks back, I wrote about Amazon Alexa and its functionality. Amazon is taking this emerging technology into dog-food mode, i.e. leveraging it in its own tablet product, the Fire.
Amazon wants to be under the Christmas tree this year. It's cut the price of its new Fire tablet almost in half and added its popular voice assistant, Alexa, in hopes of making it a hot holiday item, despite a slump in overall tablet sales.
The new Fire HD8 tablet will cost $90, down from $150. Mixed-use battery life is up to 12 hours from 8, and the base storage is doubled to 16 gigabytes.
The biggest change is that the tablet will have Alexa functionality. That means that when users tap and hold the tablet's home button, they can ask the assistant for anything from weather reports to news queries, and also get the device to do things like adjusting the lights or temperature on compatible smart-home devices.
Wow, the Silver Jubilee of Linux kernel development! Yes, exactly 25 years ago, on August 25, 1991 (a Sunday), Linus Benedict Torvalds informed Minix users that he was working on a free computer operating system as a hobby.
"Hello everybody out there using Minix. I'm doing a (free) operating system (just a hobby, won't be big and professional like GNU) for 386 (486) AT clone," read the announcement.
The 2016 report will show you how fast Linux is growing, who is behind it, and what they are doing to improve the code each day, as well as which companies are sponsoring its development.
This report provides insight into the development trends and methodologies used by thousands of different individuals collectively to create some of the most important software code on the planet.
The Linux kernel powers your Android or Ubuntu smartphone as you read this story on your handheld device. It's used by search engine giant Google, as well as by 99.9% of the websites you access daily.
The Linux kernel, as the core component of a GNU/Linux operating system, is everywhere around us.
Successful Silver Jubilee Celebration for Linux !!!
I read an interesting architecture article about the approach adopted by PayPal.
I am wondering how PayPal took a billion-hits-a-day system that might traditionally run on hundreds of VMs and shrank it down to run on just 8 VMs, while staying responsive even at 90% CPU, at transaction densities PayPal has never seen before, with jobs that take one tenth the time, all while reducing costs and allowing for much better organizational growth without growing the compute infrastructure accordingly.
Last month, Amazon released the Alexa Skills Kit, which allows outside developers to create new services that work with the voice-activation technology.
Developers can use the Smart Home API to enable smart home capabilities, such as controlling lights, door locks, or alarms. They can also create custom skills, using ASK to design the voice user interface and building cloud-hosted code that interacts with cloud-based APIs.
Among the companies drawn to Alexa is Macadamian, which has developed two of its own skills. One is a new service called "Fantasy Scoreboards," which allows users to control a WiFi-connected National Hockey League scoreboard when connected to Amazon Echo.
The company initially explored voice interaction through the phone (like Siri on iOS) before shifting to Alexa, noted Martin Larochelle, chief architect at Macadamian.
The company previously developed a skill to send text messages using the Twilio API. It is considering adding a third skill this summer, which likely will give users a way to set reminders for upcoming city services -- like recycling and garbage pickup days or other routine services.
Alexa Skills Kit is a free SDK for developers, providing a low-friction way to get an Alexa skill up and running within the space of a few hours. No experience in speech recognition or natural language is required, as Amazon handles the chore of understanding spoken word requests.
AWS Lambda can help in the development process, Amazon noted, as it runs the developer's code in response to triggers, and manages compute resources in the AWS Cloud.
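To make this concrete, here is a minimal sketch of the handler code behind a custom skill, written in Python. The "WeatherReport" intent name and its canned reply are hypothetical; a real skill is also paired with an interaction model configured in the ASK developer console.

```python
def lambda_handler(event, context=None):
    """Minimal custom-skill handler sketch: read the intent name from
    the Alexa request JSON and return a plain-text speech response."""
    intent = event.get("request", {}).get("intent", {}).get("name", "")
    if intent == "WeatherReport":
        speech = "Today is sunny with a high of 30 degrees."
    else:
        speech = "Sorry, I did not understand that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

# Simulate the JSON request Alexa would POST to the skill endpoint.
sample = {"request": {"type": "IntentRequest",
                      "intent": {"name": "WeatherReport"}}}
print(lambda_handler(sample)["response"]["outputSpeech"]["text"])
```

Since Amazon handles the speech recognition, the skill code only ever deals with this structured request/response JSON.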
It's official: Verizon has acquired Yahoo in a $4.83-billion deal. The deal ends a months-long process that saw participation from companies including AT&T and a group led by famed wealth manager Warren Buffett.
It's always nostalgic to put the gear in reverse and look back at the history.
Yahoo began in 1994 as “Jerry’s Guide to the World Wide Web,” a list of websites curated by Stanford University students Jerry Yang and David Filo. It grew quickly as millions of Americans began turning on dial-up Internet connections and needed a home page that would direct them to all their essential destinations.
In 1996 it went public and rode the dot-com bubble to epic heights, reaching a peak of $500 a share in January 2000. Yet Yahoo missed the opportunity of a generation to convert its early lead and millions of users into more than just a portal.
Over the last four years, CEO Marissa Mayer, a former Google executive, tried to right Yahoo's ship. Announcing the big news, she sent a mail to Yahoo employees across the globe. The closing note of the mail reads:
"Yahoo is a company that changed the world. Now, we will continue to, with even greater scale, in combination with Verizon and AOL."
It's an ironic end, far from anything Yahoo's founders could have dreamed of when they first launched.
Gupshup, a Silicon Valley-based bot builder platform, has announced a partnership with Cisco to connect chatbot developer capabilities with Cisco’s cloud-based collaboration service, Cisco Spark (a Slack-like messaging service for the workplace).
This will let developers quickly build and incorporate advanced chatbot functionality into new and innovative service offerings for Cisco customers.
The news comes just after an earlier announcement from Gupshup regarding the upcoming launch of bot templates for small and medium businesses (SMB), according to Economic Times. The announcement, made Monday, will allow SMBs to deploy bots — software that can run automated tasks — across multiple platforms, including SMS, Messenger, Slack, and Telegram.
Chatbots are fast transforming the way in which businesses use computers, providing a simpler, more conversational interface to advanced services, such as trawling through masses of company data, or surfacing necessary files and documents in a speedy manner.
By utilizing bots in the workplace, businesses can enhance productivity, off-loading many of the time-consuming activities that humans have had to do manually in the past.
For the last three years, China has topped the Top500 list of the most powerful supercomputers.
A couple of weeks back, the Top500 group announced that Tianhe-2 has been ousted, by a huge benchmark margin, by another Chinese supercomputer, the Sunway TaihuLight, based at the National Supercomputing Center in Wuxi.
A recent report says the United States has, for the first time, lost its status as the country with the most systems on the list. China now has 167 systems on the Top500 list to the U.S.'s 165.
As for India, IISc SERC leads at the 110th position, followed by IBM's iDataPlex at 139, IIT Delhi at 217, CDAC Param at 337, and IIT Kanpur at 397.
Although supercomputing progress has slowed in recent years, still-more-powerful machines are on the horizon as AI and analytics are applied across the industry.
Alexa is Amazon's cloud-based voice service. It powers voice experiences on millions of devices, including the Amazon Echo and Echo Dot, Amazon Tap, Amazon Fire TV devices, and devices like Triby that use the Alexa Voice Service.
One year ago, Amazon opened up Alexa to developers, enabling you to build Alexa skills with the Alexa Skills Kit (ASK) and integrate Alexa into your own products with the Alexa Voice Service (AVS).
Starting this month, customers can browse Alexa skills by categories such as “Smart Home” and “Lifestyle” in the Alexa app, apply additional search filters, and access their previously enabled skills via the “Your Skills” section.
Some fun facts about the Alexa Skills Kit, Alexa Voice Service, and Alexa Fund include:
There are now over 1,400 Alexa skills and the catalog has grown by 50% in just over one month
Customers have made over 3 million requests using the top 10 most popular Alexa skills
Since January 2016, selection of Alexa smart home API skills has grown by more than 5x
There are now over 10,000 registered developers using the Alexa Voice Service to integrate Alexa into their products
There are tens of thousands of developers currently working on Alexa projects
The Alexa Fund has invested in 16 startups, with a focus on smart home and wearable products to date.
Over the next year, The Alexa Fund will be expanding investments into startups that focus on robotics, developer tools, healthcare, accessibility and more
Alexa Developer Platform references are listed below:
Apache Zeppelin is an open source GUI which creates interactive and collaborative notebooks for data exploration using Spark. You can use Scala, Python, SQL (using Spark SQL), or HiveQL to manipulate data and quickly visualize results.
Zeppelin notebooks can be shared among several users, and visualizations can be published to external dashboards. Zeppelin uses the Spark settings on your cluster and can use Spark’s dynamic allocation of executors to let YARN estimate the optimal resource consumption.
To run the prediction analysis, you need to create notebooks that generate prediction percentages and are scheduled to run daily. As part of the prediction analysis, we needed to connect to multiple data sources, such as MySQL and Vertica, for data ingestion and error-rate generation. This enabled us to aggregate data across multiple dimensions, thus exposing underlying issues and anomalies at a glance.
Using Zeppelin, we applied many A/B models by replaying our raw data in AWS S3 to generate different prediction reports, which in turn helped us move in the right direction and provide better forecasting.
Zeppelin helps us to turn the huge amounts of raw data, often from across different data stores, into consumable information with useful insights.
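As a flavour of the kind of paragraph involved, the daily error-rate report boils down to an aggregation like the one below. In Zeppelin this would be a Spark SQL paragraph; here sqlite3 stands in so the query runs anywhere, and the table name and columns are assumptions for illustration.

```python
import sqlite3

# Stand-in for a Zeppelin %sql paragraph: daily error rate over a
# (hypothetical) request log with a day and an HTTP status column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE requests (day TEXT, status INTEGER)")
conn.executemany("INSERT INTO requests VALUES (?, ?)",
                 [("2016-12-01", 200), ("2016-12-01", 500),
                  ("2016-12-01", 200), ("2016-12-02", 200)])
rows = conn.execute("""
    SELECT day,
           ROUND(100.0 * SUM(status >= 500) / COUNT(*), 1) AS error_pct
    FROM requests GROUP BY day ORDER BY day
""").fetchall()
print(rows)
```

In Zeppelin, the same result set would render directly as a chart and could be published to an external dashboard.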
What is AWS Lambda?
AWS Lambda is a compute service: you upload your code, and the service runs it on your behalf using AWS infrastructure.
AWS Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging.
All you need to do is supply your code in one of the languages that AWS Lambda supports (currently Node.js, Java, and Python).
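For example, a complete Lambda function can be as small as a single handler. The S3-style event below is a sketch of the kind of trigger payload Lambda passes in; the bucket keys are hypothetical.

```python
import json

def handler(event, context=None):
    """Sketch of a Lambda handler reacting to an S3-style event:
    count the uploaded objects and report their keys."""
    keys = [record["s3"]["object"]["key"]
            for record in event.get("Records", [])]
    return {"processed": len(keys), "keys": keys}

# Invoke locally with a minimal fake event, the same way Lambda
# would call the handler in response to an upload trigger.
sample_event = {"Records": [
    {"s3": {"object": {"key": "uploads/photo-1.jpg"}}},
    {"s3": {"object": {"key": "uploads/photo-2.jpg"}}},
]}
print(json.dumps(handler(sample_event)))
```

Everything else, such as provisioning, scaling, and monitoring, is Lambda's job, not the handler's.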
Best Use Cases
AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second.
It's an ideal compute platform for many application scenarios; the best 5 use cases are listed below:
Teletext.io is well known as a serverless start-up, built entirely on AWS but leveraging only the Amazon API Gateway, Lambda functions, DynamoDB, S3, and CloudFront.
This is not only a really scalable solution with almost infinite peak capacity, but also a very cheap one (< $90K), as per their publication.
At the back of my mind, Martin Fowler's "Infrastructure as Code" keeps bumping around. He said: "Infrastructure as code is the approach to defining computing and network infrastructure through source code that can then be treated just like any software system."
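In that spirit, here is a toy sketch of infrastructure as code: the desired resources are declared as plain data in source code and rendered to a deployable CloudFormation-style template. The resource names and properties are illustrative, not a real deployment.

```python
import json

# Declare the desired infrastructure as data in source code, then
# render it to a template that can be reviewed and version-controlled
# just like any other software artifact.
infrastructure = {
    "Resources": {
        "StaticAssets": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "demo-static-assets"},
        },
        "ApiFunction": {
            "Type": "AWS::Lambda::Function",
            "Properties": {"Runtime": "python3.9",
                           "Handler": "app.handler"},
        },
    }
}

template = json.dumps(infrastructure, indent=2, sort_keys=True)
print(sorted(infrastructure["Resources"]))
```

The point is the workflow: infrastructure changes become code reviews and diffs, rather than manual console clicks.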
An awesome experience learning this emerging technology and its business value.
As I wrote in my weekly blog last month, Google I/O has just wrapped up with some exciting industry updates. Let's see what the new offerings are.
Google I/O 2016, Alphabet's annual developer conference, ran from May 18-20 at the Shoreline Amphitheatre in Mountain View, California. This year, the announcements came thick and fast, but many were not quite ready for general demoing or product release, so we'll have to wait a little longer to see them in action.
Here are a few major announcements from Google I/O 2016 that interested me.
Assistant is a kind of revamped version of Google Now. It's a beefed-up voice assistant that can help you do much more than ever before.
It can now provide users with more natural two-way conversations and do much more than just schedule events and perform quick Google searches.
Home is the central hub for connecting Google Assistant with your connected home, smart devices, and more. It's a direct competitor to the Amazon Echo.
Like the Echo, Google Home integrates a built-in Bluetooth speaker and microphone in one small, sleek package. Home is meant to be your hub for all things Google, right from your living room, kitchen, or wherever you place it.
Allo is a new phone number-based messaging app with three main aspects: self-expression, Assistant integration, and security and privacy.
As Allo features end-to-end encryption, incognito chats, private notifications and expiring chats, it will be the “first home for Google Assistant”.
Duo is a new cross-platform video calling app and a companion to Allo. Duo lets you initiate fast one-to-one video calls.
It shows you the person calling you, in a live video preview, before you even answer the call.
Daydream VR (Virtual Reality)
It's Google's vision for an affordable mobile virtual reality platform. The first Daydream-ready devices will arrive later this year.
Android N will feature a VR Mode that puts chipsets into "performance mode", adds head-tracking algorithms, supports sub-20 ms latency on mobile devices, and renders incoming messages and calls in 3D so that they appear in stereo.
On the software front, Daydream provides a standard VR interface for mobile and adds a VR category to the Play Store.
Google Chrome OS
Contrary to the rumors that Chrome OS would be folded into Android, Google officially denied the claim.
After tackling wearables, TVs, and autos, Google is in need of a real computer operating system. So, Chrome OS will live on in some form, even if it’s just functionality integrated into Android.
It's the code name of an upcoming release of the Android OS (Operating System). It has a few key features, like multi-window support, direct reply notifications, data saver, picture-in-picture, and updates with no device flashing. The release schedule of N will be:
The complete set of Google I/O 2016 videos is available as a YouTube series. Enjoy it at https://www.youtube.com/playlist?list=PLOU2XLYxmsILe6_eGvDN3GyiodoV3qNSC
Stay tuned for more on all the developments from Google I/O.
This week turned out to mark the 25th anniversary (the "Silver Anniversary") of Visual Basic (VB)'s debut to the world. It is a celebration so comprehensive that it is also a marathon: VB's journey spans every era, from VB 1.0 to VB6, to the early days of VB.NET, to Roslyn.
Despite much of the hate VB has gotten over the years, it served an insanely important purpose in the rise of internal business software. It is amazing how much of that world still runs on VB6 applications; something like a third of insurance software still does.
The transition from VB6 to VB.NET was a really sad one, as it lost a lot of people: .NET is a lot more difficult than VB6 was. The result is that an entire group of people simply stopped making software, and now we have businesses running on applications, more than 20 years old, that some random person in the company threw together over a week.
There is a huge gulf between building a VB6 app then and throwing together a web app today, and despite much of the progress that has been made, we have taken some big steps back in terms of accessibility.
The difference between VB.NET and C# is pretty superficial, but I sincerely hope that someday people can experience the magic that something like VB6 offered. A lot of people got their start in programming using Visual Basic.
Google I/O is an annual developer conference at which the company announces new hardware and software, hosts educational sessions pertaining to its various products and services, and broadly tries to build hype and warm feelings in the hearts of creators and fans who watch online.
The event begins with a keynote. This year, the host will be Google CEO Sundar Pichai. We expect him and his colleagues to make a number of announcements, like giving an official name to the new mobile operating system, revealing more of Google's plans for augmented and virtual reality, and detailing how Android will work on the desktop.
How to watch
San Francisco: 10AM / New York: 1PM / London: 6PM / Berlin: 7PM / Moscow: 8PM / Beijing: 1AM (May 19th) / Tokyo: 2AM (May 19th) / Sydney: 3AM (May 19th).
Last Saturday (May 7), I took part in a Design Workshop conducted by the ThoughtWorks Chennai division.
The theme of the workshop was "Incremental Design", seen as a chance to validate the participants' software design and coding skills. The session was hosted by GM Sivsu, with around 100 coding participants.
Over a period of 4 hours, the participants found out how Incremental Design works by solving a given retail use case. You were allowed either to pair with a fellow participant or to work solo on the problem statement, in any programming language of your choice. A series of checkpoints during the course of the workshop ensured that we paused, observed, and learnt to design the right way.
Personally, I learnt the below top 5 key items while solving the given problem statement.
1. Understanding the User
2. Functional Coverage
4. Defining Core features
1. Understanding the User
Usually Personas are created and validated through User Research in the field.
2. Functional Coverage
Functionality is covered with the basic/strong test cases and the matching result set.
In any failure case, the app should remain stable, exiting or reporting the error gracefully.
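In code terms, a small sketch of that graceful-failure principle for the retail use case; the SKU lookup and price values are hypothetical examples.

```python
def lookup_price(catalog, sku):
    """Return an explicit, stable result for unknown SKUs instead of
    raising an exception and crashing the app."""
    if sku not in catalog:
        return {"ok": False, "error": "unknown SKU: %s" % sku}
    return {"ok": True, "price": catalog[sku]}

catalog = {"A100": 49.99}
print(lookup_price(catalog, "A100"))
print(lookup_price(catalog, "B200"))
```

The caller always receives a well-formed result and can decide how to surface the error, so a bad input never takes the whole app down.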
4. Defining Core features
Inevitably, when building a product, teams are constrained by time, money, resources, etc., and cannot build everything they want at once. At the end of the workshop, the solutions were compared against the different feature ideas the teams had come up with, to select the best.
As per the core theme of Incremental Design, later change requests should be easy to adopt and extend further.
With this interesting hands-on mode of learning, I am signing off this week's activity.
In the latest Forrester Wave report for Big Data Hadoop Cloud Solutions, Microsoft Azure came out on top, beating both Google and Amazon Web Services.
Microsoft specifically was called out for having a cloud-first strategy that is paying off. Tiffany Wissner, Senior Director of Product Marketing, Data Platform, mentioned the following in a recent blog post:
Our goal is to make big data processing and analytics simpler and more accessible to bring big data to everybody. We do this through the Cortana Intelligence Suite which offers a fully managed big data and advanced analytics solution in the cloud.
Within Cortana Intelligence is the Azure Data Lake which includes HDInsight, a managed Hadoop service that runs Hortonworks Data Platform, Data Lake Analytics, a new service built on Apache YARN that dynamically scales your big data jobs, and Data Lake Store, a single repository to capture data of any size, type, and speed. We also offer IaaS images in Azure Marketplace that deploy any third party Hadoop distribution (Hortonworks, Cloudera, and MapR).
After a 37-criteria evaluation, Forrester recognized Microsoft Azure as a leader in Big Data Hadoop Cloud Solutions.
Having been swimming in the .NET world for the last decade, the first obvious question that struck my mind was: 'Why do you need Visual Studio Code?' The existing Visual Studio is pretty much sufficient for my development activities.
Hmm... Visual Studio Code is a completely separate product from Microsoft. It's not as powerful as the traditional Visual Studio, but it's not worth comparing them either; VS Code has its own advantages.
In my view, two bullet points capture the product differentiation:
While Visual Studio and Visual Studio Code share a name, there is little similarity between the two. I don't agree with the view that Visual Studio Code is a "stripped down" version of Visual Studio.
Visual Studio Code provides you with a new choice of developer tool, one that combines the simplicity and streamlined experience of a code editor with the best of what developers need for their core code-edit-debug cycle.
Though it is not really as user friendly as VS, it is super fast; the loading time is almost instantaneous.
Also, it already integrates a basic Git client, code auto-completion, and debugging tools. This lean factor supports the productivity of niche developers.
Visual Studio Code is the first code editor, and first cross-platform development tool – supporting OSX, Linux, and Windows – in the Visual Studio family.
If you prefer a code editor centric development tool or are building cross-platform web and cloud applications, VS Code is the right choice
Though VS Code is less than a year old, its adoption and growth are admirable, as depicted in the product's usage metrics.
Anyone running QuickTime on their Windows computer is being urged by the US government to uninstall the program right away, over fears that weaknesses in the software could leave them vulnerable to cyber-criminals.
As discovered by security firm Trend Micro, Apple, which develops Quicktime, is ending its support for the Windows version of the software.
This means it will no longer be issuing security updates, making it easier for hackers to use the software as a way into their targets' computers. The firm's experts also identified two "critical vulnerabilities" affecting the software, which could provide a window for hackers to launch cyberattacks against users.
Trend Micro's warning was echoed by the US Department of Homeland Security's Computer Emergency Readiness Team (US-CERT), which said users who still have Quicktime for Windows running on their machines could now be vulnerable to "loss of confidentiality, integrity or availability of data," as well as facing increased risks from viruses and other security threats.
Microsoft officials blogged about availability of the Visual Studio Code (VS Code) 1.0 release on April 14, noting that more than 500,000 developers are using VS Code each month. There have been two million installations of VS Code since the public preview was released in March 2015.
Full-blown Visual Studio runs only on Windows, and supports projects and solutions. VS Code is based on files/folders, and is especially suited to building cross-platform Web and cloud applications.
VS Code can trace its roots back to work done by Microsoft's "Monaco" team, which was charged with building a subset of Visual Studio that would run in a browser. It evolved to become an editing tool that could be installed on Windows, OS X and Linux and used for any type of code editing, navigating, debugging and working with Git.
It can be downloaded for free and works on Windows 7, 8 and 10; Linux x64 (Debian, Ubuntu, Fedora, CentOS), and OS X Yosemite and El Capitan.
Today, I read an interesting article at IndiaTimes about a technology showcase by an Indian.
The US government airport security agency recently contacted tech giant IBM to create an app for managing passengers in airports.
The app didn't do anything particularly complicated; it randomly directed people in queues left or right at the press of a button. Like any other big tech company, IBM charged a premium amount: $1.4 million (Rs 9.5 crore), to be precise.
It was all fine till Ex-IBM employee Sandesh Suvarna entered the scene. He decided to make the app all by himself.
And it took him around 4 minutes to re-create a $1.4 million app.
And if that doesn't sound impressive enough, Sandesh completed the whole process while making a video of it.
In another feather in the cap of Microsoft CEO Satya Nadella's open-source strategy, the Linux Bash shell (Bourne Again SHell) and Ubuntu binaries are now part and parcel of Windows 10.
During this week's BUILD conference, keynote speaker Kevin Gallo announced that you can now run "Bash on Ubuntu on Windows." This is a new developer feature included in the Windows 10 "Anniversary" update, built in partnership with Canonical, the creators of Ubuntu Linux.
Why does it matter? It lets you run native user-mode Linux shells and command-line tools unchanged, on Windows.
Wim Coekaerts is well known for transforming Oracle into a Linux-dominated company. A few hours ago, Fortune confirmed his move to Microsoft: http://fortune.com/2016/04/01/microsoft-snags-oracles-linux-guru/
In his Oracle tenure, he brought the company its first Linux products; moved Oracle's programming staff from Windows to Linux desktops; and turned Oracle into a Linux distributor with the launch of its Red Hat Enterprise Linux (RHEL) clone, Oracle Linux.
With his arrival, interesting times lie ahead for both Microsoft and Linux. So how do you start playing?
After turning on Developer Mode in Windows Settings and adding the feature,
you just need to run bash; you are then prompted to get Ubuntu on Windows from Canonical via the Windows Store.
Isn't it cool?
Demanding IIT assignments urged me to run Bash on Windows, and until recently there were only a few choices:
1. Cygwin: GNU command-line utilities compiled for Win32, with great native Windows integration. But it's not Linux.
2. Hyper-V and Ubuntu: run an entire Linux virtual machine (dedicating x MB of RAM and y GB of disk) and then remote into it (RDP, VNC, SSH).
Now, the current release is not Bash or Ubuntu running in a VM. This is a real native Bash Linux binary running on Windows itself.
This is a genuine Ubuntu image on top of Windows with all the Linux tools like awk, sed, grep, vi, etc. It's fast and it's lightweight.
The binaries are downloaded using apt-get - just as on Linux, because it is Linux. You can apt-get and download other tools like Ruby, Redis, emacs, and on and on.
Let me put it in simple terms: the same local file system we used to browse with Windows Explorer can now be accessed from the Bash Linux command line (with tools like ls -l, or its common alias ll).
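To make that mapping concrete, here is a minimal Python sketch of how a Windows path corresponds to the path Bash on Windows exposes under /mnt, where each drive letter becomes a lowercase mount point. The helper function is hypothetical, written for illustration only:

```python
def windows_to_wsl_path(win_path):
    """Map a Windows path such as C:\\Users\\me to the /mnt/c/Users/me
    form visible inside Bash on Ubuntu on Windows (hypothetical helper)."""
    drive, sep, rest = win_path.partition(":\\")
    if not sep:
        raise ValueError("expected an absolute path like C:\\...")
    # Lowercase the drive letter and flip the path separators.
    return "/mnt/" + drive.lower() + "/" + rest.replace("\\", "/")

print(windows_to_wsl_path("C:\\Users\\me\\project"))  # /mnt/c/Users/me/project
```

So a folder you open in Windows Explorer at C:\Users\me\project is the same folder you cd into from Bash at /mnt/c/Users/me/project.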
As a tutorial, Rich Turner and Russ Alexander recorded a Build 2016 session introducing and demonstrating Bash running on Ubuntu on Windows.
Interestingly, this product is fast and lightweight; it's the real binaries.
This disruptive technology is brilliant for developers who, like me, use a diverse set of tools. Enjoy, if you wish.
As a result of this weekend's reading, here's an interesting Gartner update on "Critical Capabilities for BI (Business Intelligence) and Analytics Platforms".
The very first use of what we now mostly call business intelligence was in 1951, on the Lyons Electronic Office (LEO), powered by over 6,000 vacuum tubes. It's about "meeting business needs through actionable information". The linear equation of BI growth is depicted in the attachment.
The BI and analytic platform market has undergone a fundamental shift. During the past 10 years, BI platform investments have mostly been in IT-led consolidation and standardization projects for large-scale system-of-record reporting.
As demand from business users for pervasive access to data discovery capabilities grows, IT wants to deliver on this requirement without sacrificing governance — in a managed or governed data discovery mode.
The business analytics of tomorrow is focused on the future (predictive) and tries to answer (prescriptive) the questions: What will happen? How can we make it happen?
Predictive analytics encompasses a variety of techniques from statistics, data mining, and game theory that analyze current and historical facts to make predictions about future events.
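As a toy illustration of the statistical side of predictive analytics, the sketch below fits an ordinary least-squares line to historical figures and extrapolates one step ahead. The revenue numbers are invented for the example, not taken from any report:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with a single predictor."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Illustrative quarterly revenue figures; forecast the next quarter.
quarters = [1, 2, 3, 4]
revenue = [10.0, 12.0, 13.5, 16.0]
a, b = fit_line(quarters, revenue)
forecast = a * 5 + b
print(round(forecast, 2))  # 17.75
```

Real predictive platforms layer far richer models on top, but the principle is the same: learn a pattern from historical facts and project it forward.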
BI has passed a tipping point as it shifts away from IT-centric, reporting-based platforms.
Early entrants to the data discovery market may have strong capabilities in interactive visual data discovery.
Vendors earn a higher differentiation score by offering emerging capabilities such as search, embedded analytics, collaboration, self-service data preparation and big data.
A few predictions to define the BI analytics road map:
By 2018, data discovery and data management evolution will drive most organizations to augment centralized analytic architectures with decentralized approaches.
By 2018, smart, governed, Hadoop-based, search-based and visual-based data discovery will converge into a single set of next-generation components.
By 2020, 80% of all enterprise reporting will be based on modern business intelligence and analytics platforms; the remaining 20% will still be on IT-centric, reporting-based platforms because the risk of change outweighs the value.
As per the Gartner report, the BI market has shifted to more user-driven, agile development of visual, interactive dashboards with data from a broader range of sources. With my rich experience building a financial enterprise data hub, I can sense the breadth and depth of "broader range of sources".
Hadoop-as-a-Service (HaaS) vendor Altiscale is moving up the stack with a new service called Altiscale Insight Cloud, which sits on top of its existing service, Altiscale Data Cloud. How does it work?
Ingest services consist of a user interface over jobs that run on Apache Oozie, and allow the definition of validation rules on the ingested data. Analysis functionality is provided by an OEM'd implementation of Alation, a product which acts as a data catalog. Underneath Alation, Altiscale has configured the Hadoop cluster such that Hive and Spark SQL point to exactly the same data files, and either technology can be used to satisfy queries.
Insight Cloud nicely finishes off the raw infrastructure of Altiscale Data Cloud with some basic functionality to make the combination of Hadoop and Spark more usable, but without reinventing the wheels that BI and Big Data analytics players have in-market already.
Altiscale says Insight Cloud is a Hadoop/Spark offering that is very BI tool-ready, so that users of Tableau, Excel or other common self-service tools can more readily attach to and analyze Big Data.
Pricing is consumption-driven, starting at $9,000/month for 20TB of storage and 10,000 "task hours". Having Insight Cloud in-market makes Altiscale more competitive with fellow HaaS provider Qubole.
Looking back at the past decade, it is pretty impressive to see just how much the IT world has changed. Even more impressive, the change is not limited to technology. Business models have changed, as has the language around it.
A decade ago we would not have spoken of the cloud, micro-services, server-less applications, the Internet of Things, containers, or lean startups. We would not have practiced continuous integration, continuous delivery, DevOps, or ChatOps.
Today, keeping current means staying abreast of developments in programming languages, system architectures, and industry best practices. It means that you spend time every day improving your current skills and looking for new ones.
It's great to turn back and look at the success path of AWS:
In today's application architecture, any type of distributed app needs 3A (authentication, authorization, and auditing) support.
Software engineers are constantly called on to authenticate or authorize access to applications and systems. And if they're not, they should be.
The outline is roughly defined in the four steps below:
1. A user accesses the AuditConsole with a non-authenticated request.
2. The user is redirected to the CAS login server.
3. After the user logs into CAS, the CAS server issues a ticket for the username.
4. The AuditConsole validates the ticket and loads the authenticated user from the user database.
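In the final step, the application typically calls the CAS server's serviceValidate endpoint with the ticket and parses the XML it returns. The sketch below shows only that parsing stage, using Python's standard library; the sample response is fabricated for illustration, though the cas XML namespace is the one the CAS 2.0 protocol defines:

```python
import xml.etree.ElementTree as ET

# Namespace used by CAS 2.0 serviceValidate responses.
CAS_NS = {"cas": "http://www.yale.edu/tp/cas"}

def parse_cas_validation(xml_text):
    """Extract the authenticated username from a CAS serviceValidate
    response, or return None when validation failed."""
    root = ET.fromstring(xml_text)
    user = root.find("cas:authenticationSuccess/cas:user", CAS_NS)
    return user.text if user is not None else None

success = """<cas:serviceResponse xmlns:cas="http://www.yale.edu/tp/cas">
  <cas:authenticationSuccess><cas:user>alice</cas:user></cas:authenticationSuccess>
</cas:serviceResponse>"""
print(parse_cas_validation(success))  # alice
```

A real deployment would fetch this XML over HTTPS from the CAS server, then look up "alice" in the local user database to finish the login.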
There are three major standards, namely OAuth, SAML, and OpenID. Google, Amazon, Facebook, and just about every other major internet application provider supports at least one of these standards, and most support all of them.
An emerging authentication technology in the industry is CAS (Central Authentication Service).
CAS is an enterprise Single Sign-On solution for web services. Single Sign-On (SSO) means a better user experience when running a multitude of web services, each with its own means of authentication. With an SSO solution, different web services may authenticate against one authoritative source of trust that the user logs in to, instead of requiring the end user to log in to each separate service.
A number of out-of-the-box solutions exist to enable web services written in a specific language, or based on a framework, to use CAS. This would enable deployers to implement a SSO solution in a matter of hours.
Along similar lines, let's catch up on the other standards quickly.
OASIS started work on SAML in 2001. Common implementations use XML data passed between various subsystems to authenticate users. OpenID addresses the same problem, and experience has shown it to be a bit more scalable.
Brad Fitzpatrick initiated OpenID in 2005, with support for signing and strong encryption. By 2008, it was supported by Sun Microsystems and Yahoo! for authentication and authorization.
Blaine Cook started working on OAuth in 2006 while working on OpenID at Twitter. Designed for HTTP, OAuth uses access tokens as the basis for user authentication, a common pattern.
Enterprise Use Case
You might be a little confused about which one to use. The answer is based purely on your application's use case.
CAS centralizes authentication. Use it if you want all your applications to ask users to log in to a single server.
OpenID decentralizes authentication. Use it if you want your application to accept users logging in with whatever authentication service they choose.
OAuth is not about Single Sign-On. It handles authorization, which is about letting the user control how their resources may be accessed by third parties.
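That delegated-access idea shows up in practice as a bearer access token attached to each request: the third-party service presents the token, and the provider checks its scope instead of ever seeing the user's password. A minimal sketch using Python's standard library, with a hypothetical endpoint and token:

```python
import urllib.request

def authorized_request(url, access_token):
    """Attach an OAuth 2.0 bearer access token to an HTTP request;
    the resource server validates the token instead of a password."""
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Bearer " + access_token)
    return req

# Hypothetical endpoint and token, for illustration only.
req = authorized_request("https://api.example.com/photos", "hypothetical-token")
print(req.get_header("Authorization"))  # Bearer hypothetical-token
```

The user can later revoke that token at the provider without changing their password, which is exactly the control over third-party access that OAuth is designed for.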
As per your application's requirements and specifications, the appropriate industry-standard pattern/practice should be leveraged, not the other way round!