Blog

Jul 6, 2020

Japanese supercomputer is now the fastest in the world and is also aiding in tackling the pandemic

Supercomputers are probably the most reliable research tools available to scientists. They play a significant role in computational science and in a wide range of computationally intensive tasks, such as quantum mechanics, climate research, weather forecasting, molecular modeling, physical simulation, the properties of chemical compounds, macro- and micro-molecule analysis, aerodynamics, rocket science, and nuclear reactions. The faster the supercomputer, the faster and more accurate the computation, so engineers and scientists incessantly develop better machines, and every year the world's supercomputers are ranked by speed.

This time the Japanese supercomputer 'Fugaku', housed at the RIKEN Center for Computational Science in Kobe, has topped the list. It is the successor to the 'K computer', which topped the list in 2011, and it will be fully operational from 2021. Fugaku is built around the Fujitsu A64FX microprocessor, whose CPU architecture is based on Armv8.2-A and adopts the Scalable Vector Extension designed for supercomputers. Fugaku was designed to be 100 times more powerful than its predecessor. It recorded a speed of 415.5 petaflops on the TOP500 HPL benchmark, 2.8 times faster than its nearest competitor, IBM's 'Summit'. Fugaku has also topped the other ranking systems, Graph500, HPL-AI, and HPCG, in which supercomputers are tested on different workloads. This is the first time any supercomputer has topped all four ranking systems at once, which says a great deal about its reliability for future work.

The cost of the machine is estimated at around 1 billion USD, roughly four times that of Summit. This enormous cost has drawn significant criticism from many experts. According to The New York Times, exascale supercomputers with similar features will be developed in the near future at a far lower cost than Fugaku. The government has also been criticized heavily, with some speculating that it is overspending on the project simply to sit at the top of the list amid the pandemic.

Recently, Fugaku has been used in research on drugs for Covid-19, on diagnostics, and on simulating the spread of the coronavirus. It is also being used to track and improve the effectiveness of the Japanese contact-tracing app that alerts users to possible infection. According to The Japan Times, in the latest research the supercomputer ran molecule-level simulations related to drugs for the coronavirus. A simulation of 2,128 existing drugs picked out dozens that could bond easily to the virus's proteins; the run lasted 10 full days. The results were quite accurate, as 12 of the drugs it flagged were already undergoing clinical trials overseas, and the research has raised scientists' hopes for a remedy. The expert team will continue its research on Fugaku and has announced that it will negotiate with the potential drug patent holders so that clinical trials of a possible drug for the virus can be carried out, allowing early treatment of infected people. According to experts, the supercomputer is also likely to be effective at predicting and studying earthquakes in the future.
Japan has a long history of earthquakes, since the country sits at the junction of several continental and oceanic plates and is ringed by volcanoes. Fugaku could help estimate the likelihood of earthquakes, giving the government and residents time to follow an escape plan when natural disasters strike.
Jun 22, 2020

Bose-Einstein Condensate, the fifth state of matter, has been made aboard the ISS

Scientists on the International Space Station have made the fifth state of matter, known as the Bose-Einstein Condensate. The other four classical states of matter are solid, liquid, gas, and plasma; the Bose-Einstein Condensate, or BEC, is classified as a modern state of matter.

What actually is a Bose-Einstein Condensate?

A Bose-Einstein Condensate forms when a very dilute, low-density gas of bosons is cooled to a temperature very close to absolute zero (−273.15°C). The temperature is so low that the atoms settle into the same lowest quantum state. In that state, the spacing between atoms becomes comparable to their quantum wavelength, and this overlap allows them to behave as a single atom. The change lets microscopic quantum phenomena act as a macroscopic phenomenon, making observable what would otherwise be undetectable.

BECs are made in what is described as the coldest place in the observable universe: the Cold Atom Lab (CAL), a laboratory on the International Space Station orbiting at a height of 408 km. The Cold Atom Lab can cool particles in vacuum down to one ten-billionth (1/10^10) of a degree above absolute zero. That temperature is practically equivalent to absolute zero but never equal to it, since reaching absolute zero itself is physically impossible.

How is a BEC prepared in the Cold Atom Lab?

To prepare a BEC, bosonic atoms in gas form are injected into the Cold Atom Lab. The atoms are trapped and confined within a small, dense region by a magnetic trap generated by current-carrying conductors. Once the atoms are trapped, laser beams are used to lower their temperature further. When the fifth state of matter is reached, the main problem begins: studying and analyzing it. To examine the condensate's characteristics, the atoms are released from the magnetic trap. As the atoms of the BEC separate, their temperature drops further, since gases cool as they expand. But if the atoms drift too far apart, they stop behaving like a condensate and revert to the behavior of individual atoms. This hypersensitivity leaves researchers only a tiny window of time for study. Gravity also plays a crucial role in the experiment, which is why it ought to be done in space rather than on Earth!

Why is the experiment carried out on the International Space Station?

There is a significant reason for carrying out the experiment on the International Space Station, or in space generally. If the experiment were performed on Earth while the condensate's volume increased, Earth's gravity would pull the atoms downwards in the apparatus, and they would spill onto its base. To escape the effect of gravity, researchers came up with a plan to let the condensate free-fall. This method was tried earlier in Sweden, where the apparatus was launched to an altitude of about 240 km, creating a free-fall condition of approximately 6 minutes.
Eventually, the International Space Station was chosen for the experiment, since the ISS, like any orbiting object or satellite, is in a state of permanent free fall around the Earth. This allowed the experiment to run for a longer period, providing enough time and data to analyze and study the live form of the condensate. The experiment has so far been carried out for a total of 1.118 seconds, though the researchers' goal is to observe the live condensate for more than 10 seconds.

The Cold Atom Lab was launched by NASA in 2018 on an estimated budget of $70M. The lab occupies just 0.4 m³ and contains the lasers, magnets, and other essential components needed to control, trap, and cool the atomic gas for the experiment. The atoms are initially held at the center of a vacuum chamber and later transferred onto an 'atom chip' located at the top of the chamber. Radio waves then strip the fractionally hotter atoms off the chip, leaving behind extremely cold atoms at less than a billionth of a kelvin.
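To get a feel for why such extreme cold matters, the short sketch below estimates the thermal de Broglie wavelength of rubidium atoms, a species commonly used in CAL experiments, at the temperatures quoted above. Condensation sets in roughly when this wavelength becomes comparable to the spacing between atoms; the density figure used here is an illustrative assumption, not a measured CAL value.

```python
# Rough estimate: thermal de Broglie wavelength vs. interatomic spacing
# for rubidium-87 near CAL temperatures. The density below is illustrative.
import math

h = 6.626e-34            # Planck constant, J*s
kB = 1.381e-23           # Boltzmann constant, J/K
m = 87 * 1.66e-27        # mass of a rubidium-87 atom, kg

def de_broglie_wavelength(T):
    """Thermal de Broglie wavelength: lambda = h / sqrt(2*pi*m*kB*T)."""
    return h / math.sqrt(2 * math.pi * m * kB * T)

T = 1e-10                # one ten-billionth of a kelvin
n = 1e19                 # assumed atom density, atoms per m^3
spacing = n ** (-1 / 3)  # typical distance between neighboring atoms

print(f"wavelength: {de_broglie_wavelength(T):.2e} m")
print(f"spacing:    {spacing:.2e} m")
# When the wavelength grows to the same order as the spacing, the atomic
# wave packets overlap and the gas crosses into the BEC regime.
```

At these temperatures the wavelength comfortably exceeds the assumed spacing, which is exactly the overlap regime described earlier.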
Conclusion

Although the study of this highly unusual new state of matter is still in its infancy, in the future it could enable extremely significant inventions and discoveries. Being ultra-sensitive, the Bose-Einstein Condensate could form the basis of ultra-sensitive instruments for detecting the faintest signals and other mysterious phenomena in the observable universe, such as gravitational waves and dark energy. Researchers also count on its significance in building inertial sensors such as accelerometers, gyroscopes, and seismometers. Hundreds of other similarly crucial experiments and studies can be performed on the International Space Station, whose permanent free fall proves its worth here. Currently, scientists and researchers are experimenting with the new state of matter under unique conditions in the hope of discovering or inventing something novel. Though they can now create a Bose-Einstein condensate in space, they are trying their best to increase the duration of the experiment.
Jun 17, 2020

The pandemic has pushed self-driving cars to train in the virtual world

The world is turning to automation, and so is the automobile industry. There has been rapid and significant expansion in the autonomous vehicle industry and in the AI systems that control vehicles on the road, even in difficult conditions. Since the coronavirus pandemic, self-driving companies and start-ups have had to suspend their real-world data collection, which requires a team of operators and the vehicles themselves out on the road; the lockdowns do not allow these organizations to operate safely on the streets. But the lockdown has also spurred new ideas for the industry. Researchers have come up with techniques for creating a simulated virtual world in which these automated vehicles can train and develop. All they need is the data collected over the years in the real world, mapped onto virtual-world simulators.

Waymo, a software company in the self-driving industry whose parent is Alphabet Inc., has offered the data it has gathered to research organizations for the development of virtual-world simulators and autonomous driving. Waymo's role in data sharing is considered significant and crucial because its vehicles have already covered millions of miles on roads in different conditions. Other companies, such as Lyft and Argo AI, have also contributed majorly by open-sourcing their datasets.

The data is collected in the field by an array of high-technology devices. The vehicles are covered with multiple sensors, including several cameras, RADAR, and LIDAR (Light Detection and Ranging) units. The equipment bounces laser pulses off the surfaces of nearby objects, and from the returns, 3D images of the surroundings are created.
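To make that concrete, here is a toy sketch of the geometry involved: each LIDAR return is a measured range along a known beam direction, and converting those polar measurements to Cartesian coordinates is what assembles the 3D point cloud. The ranges and angles below are fabricated for illustration, not any company's real data.

```python
# Toy conversion of LIDAR returns (range + beam angles) into 3D points.
# Each return becomes an (x, y, z) coordinate relative to the sensor.
import math

def lidar_to_xyz(range_m, azimuth_deg, elevation_deg):
    """Spherical-to-Cartesian conversion for a single LIDAR return."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

# A few fabricated returns: (range in meters, azimuth, elevation)
returns = [(12.4, 30.0, -2.0), (7.9, 31.0, -2.0), (55.2, 120.0, 0.5)]
point_cloud = [lidar_to_xyz(*r) for r in returns]
for p in point_cloud:
    print("point: (%.2f, %.2f, %.2f)" % p)
```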
Waymo's released data contains 1,000 segments, each capturing 20 seconds of continuous driving. More firms are expected to contribute data to researchers, and transparency will play a significant role. Data labeling has become integral to the simulators, in parallel with the 3D images they generate. Rather than simply laying off their vehicle operators, organizations are now training them in data labeling. This equips the industry with newly skilled associates who will come in handy after the lockdown, when they resume their original roles. Aurora Innovation, a Palo Alto based company, has taken a similar approach, moving its operators into the data labeling sector.

Newer companies like Parallel Domain give autonomous vehicle companies a platform that generates a virtual world using computer graphics. Parallel Domain was started by Kevin McNamara, a former Apple and Pixar employee with experience in autonomous-system projects. As he put it: "The idea being that, in a simulated world, you can safely make a mistake and learn from those mistakes, also you can create dreadful situations where the AI needs to be trained essentially".

Aurora Innovation, for its part, says it uses "hardware in the loop" (HIL) simulation, a technique used in the development and testing of complex real-time embedded systems. This kind of simulation helps add all the types of complexity that a system should sustain. According to Chris Urmson, the procedure is helping them detect software issues that can elude a developer's laptop and even cloud instances, yet manifest on real-time hardware.

Embark, another autonomous trucking start-up, has invested in software that can test vehicles and components offline, which allowed it to exercise the vehicle control system, including the brakes, accelerator, steering wheel, and other significant parts. All parameters were checked against an extreme range of command inputs.

Nvidia, a leading developer of graphics processors and AI, is also helping big companies like Toyota with its virtual-reality autonomous vehicle simulator, 'Nvidia Drive Constellation'. Drive Constellation uses high-fidelity simulation to create safer, more cost-effective, and more scalable training for autonomous vehicles. It uses the computing horsepower of two different servers to deliver a cloud-based computing platform capable of generating billions of qualified miles of autonomous vehicle testing. Powerful GPUs generate photoreal data streams that create a wide range of testing environments and scenarios. The main concern remains the pandemic and how these organizations will tackle such situations.

Scale AI is another company helping numerous automation players, including Lyft, Toyota, Nuro, Embark, and Aurora, with detailed labeling of previously collected data. This detailed labeling is achieved via 'point cloud segmentation'. For newcomers: point cloud segmentation is the process of classifying a point cloud into multiple homogeneous regions, so that points in the same region share the same properties. Segmentation is challenging because point cloud data is highly redundant, unevenly sampled, and lacking in explicit structure. The method encodes the correspondence of each and every point in the 3D map and can thereby differentiate pedestrians, stop signs, lanes, footpaths, traffic lights, other vehicles, and so on. The Scale AI team is also encoding a 3D map for simulation using a 'gaze detection system'. This encodes even the direction of gaze of a pedestrian, a cyclist, or the driver of another vehicle in order to predict their movement, i.e. whether a pedestrian is about to cross the road or not. This technology will let the AI anticipate the next move of a pedestrian or driver, minimizing the possibility of an accident.
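To give a flavor of what point cloud segmentation means in practice, here is a deliberately tiny sketch that partitions a point cloud into 'ground' and 'obstacle' regions using nothing more than a height threshold. Real pipelines like the ones described above rely on learned models; every value here is invented for illustration.

```python
# Toy point cloud segmentation: split points into homogeneous regions.
# Real pipelines use learned classifiers; this height rule is illustrative.
import numpy as np

rng = np.random.default_rng(42)
# 1,000 random points: x, y in meters around the sensor, z is height.
cloud = rng.uniform([-20, -20, -0.3], [20, 20, 2.5], size=(1000, 3))

GROUND_Z = 0.2  # assumed ground height cut-off, meters
labels = np.where(cloud[:, 2] < GROUND_Z, "ground", "obstacle")

for region in ("ground", "obstacle"):
    pts = cloud[labels == region]
    print(f"{region}: {len(pts)} points, mean height {pts[:, 2].mean():.2f} m")
```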
The pandemic has not just forced us to adapt; it has also pushed researchers to adapt the technology to this dreadful situation. Such developments show mankind's constant endeavor to build a better society. The world is ready to become majorly automated in the coming years, and the autonomous vehicle industry is rising exponentially. Even the pandemic and the resulting lockdown are not enough to curb the rising innovation. Soon self-driving vehicles will be on the streets.
Jun 13, 2020

AI developed by Nvidia recreated the game Pac-Man from scratch by just watching the gameplay

Nvidia, the world's leading graphics processing company, is developing some futuristic AIs. Nvidia has been working on several artificial intelligence projects and has been carrying out major research in this field for a long time now. This time the company has extended its boundaries and developed an astonishing AI that recreated the retro classic Japanese game Pac-Man, on its 40th anniversary, from scratch, just by watching the gameplay. The AI is called NVIDIA GameGAN, and it is essentially a neural game engine.

How does this AI recreate a game just by watching gameplay? The researchers say the basic principle is 'model-based learning', in which the entire logic of the game, including the controller inputs, is captured in neural networks, from which the game is regenerated from scratch, frame by frame. No access to the game's code or rendering is required. The AI was initially unable to capture the images of the game's 'ghost' characters, which chase and kill Pac-Man, rendering them as a blur. This happened because the ghosts' movement is the result of complex algorithms; each ghost is programmed with its own unique logic that determines its path through the maze. Pac-Man's own programming is far less complex, since its movement is tied directly to the controller inputs.

The basic architecture of GameGAN is divided into three parts, the dynamics engine, the rendering model, and the memory storage or 'memory module', and it works in two halves. In the first half, the neural game engine tries to reproduce the game visually from the input data; in the second half, this generated data is compared against data from the original game. If the generated data matches the original source, it is kept; if not, it is rejected and sent back through the generation process. This loop continues until the data matches accurately.

Sanja Fidler, Nvidia's director of AI at its Toronto research lab, said GameGAN had to be trained on 50,000 episodes of Pac-Man to generate a fully functional game that requires no underlying game engine. Since it was impossible for a human being to generate this humongous amount of data, an AI agent was used to produce it. The initial challenges included the near-invincibility of the AI agent: it was so good at the game that it hardly ever died, which produced a version of the game in which the ghosts merely followed Pac-Man arbitrarily and could never catch it.

The memory module of the GameGAN AI adds a new dimension, according to the researchers. It stores an internal map of the game world, the static elements of the game, as opposed to dynamic elements like Pac-Man and the ghosts. This will allow the AI to create new maps, levels, and worlds by itself, without any human intervention. Gamers and users could thereby be gifted countless new maps and game worlds, enhancing the dynamics of the game exponentially.
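To ground that generate-and-compare loop, here is a highly simplified, self-contained sketch of action-conditioned adversarial training in PyTorch. It mirrors the three-part structure described above in spirit only: the module sizes, names, and random stand-in data are invented for illustration and are not Nvidia's actual GameGAN implementation.

```python
# Sketch of an action-conditioned generator/discriminator loop in the spirit
# of GameGAN-style training. Illustrative only; sizes and data are invented.
import torch
import torch.nn as nn

FRAME_DIM, ACTION_DIM, HIDDEN = 64 * 64, 8, 256

class DynamicsEngine(nn.Module):   # evolves the latent game state
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRUCell(ACTION_DIM + FRAME_DIM, HIDDEN)
    def forward(self, action, frame, h):
        return self.rnn(torch.cat([action, frame], dim=-1), h)

class Renderer(nn.Module):         # decodes the latent state into a frame
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(HIDDEN, FRAME_DIM), nn.Sigmoid())
    def forward(self, h):
        return self.net(h)

class Discriminator(nn.Module):    # judges real vs. generated frames
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(FRAME_DIM, 1)
    def forward(self, frame):
        return self.net(frame)

dyn, ren, disc = DynamicsEngine(), Renderer(), Discriminator()
g_opt = torch.optim.Adam(list(dyn.parameters()) + list(ren.parameters()), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# One training step on a toy, random batch standing in for observed gameplay.
actions = torch.randn(32, ACTION_DIM)  # controller inputs
prev = torch.rand(32, FRAME_DIM)       # frame the player saw
real_next = torch.rand(32, FRAME_DIM)  # frame that actually followed
h = torch.zeros(32, HIDDEN)

fake_next = ren(dyn(actions, prev, h))

# Discriminator step: accept real frames, reject generated ones.
d_loss = bce(disc(real_next), torch.ones(32, 1)) + \
         bce(disc(fake_next.detach()), torch.zeros(32, 1))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: produce frames the discriminator accepts as real.
g_loss = bce(disc(fake_next), torch.ones(32, 1))
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

In the real system, the memory module would additionally read and write a latent map so that static structure persists between frames; that part is omitted here for brevity.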
Advantages and future aspects of the GameGAN AI

Researchers and gamers alike have predicted several advantages from these new characteristics of the AI. The biggest is faster game development and creation: creators will no longer have to code new layouts and levels of a game from scratch, as the AI can eventually create new game worlds visually. The AI will also simplify the development of new simulation systems for training autonomous machines, allowing an AI to learn the rules of its actual working environment before interacting with any real object in the world. In the near future, from visual data alone, machines may be able to drive a car, go grocery shopping, play a sport, or learn the laws of physics of the real world, which would be a humongous achievement for development purposes. The AI will also make it much easier to transfer a game from one operating system to another: instead of redeveloping the game's code for each operating system, the AI will do it automatically. A game can be compressed by the AI into the memory module of the neural network and stored there permanently, allowing the statics and dynamics of the game to keep developing entirely within the AI. In the near future, this could allow automated machines to outperform humans in dangerous and catastrophic situations, carrying out experiments and rescue operations.

Conclusion

Experiments, research, implementation, and development of new kinds of AI are being carried out by scientists, researchers, and engineers all across the world. The new age of machines and artificial intelligence is commencing, and soon all of us will be aided by efficient AI robots with different capabilities that will curb wasted time and accelerate human development to novel extents. GameGAN is an exquisite example of progress in machine learning and deep learning, where the possibility of developing a machine visually has now been made real. AIs like this will be used extensively in the near future to generate new simulators without a set of codes to ponder over, simply by training complex neural networks. We hope to see more amazing AIs from Nvidia and other organizations.

Leverage machine learning in your organization with Tipstat. Contact us here. Interested in working with Tipstat on AI? Check out our open positions here.
Jun 9, 2020

OpenAI’s new AI Model can generate songs similar to Elvis Presley, Katy Perry, and more!

Musical AI is fast evolving. Many independent organizations are coming up with impressive AI solutions that apply machine learning as a tool in musical workflows. OpenAI, for example, an independent research organization that aims to develop "friendly AI," has delivered many impressive AI tools over the last few years, including a language-generation tool called GPT, and has recently added Jukebox.

Jukebox, an AI that generates raw audio of genre-specific songs, might not be the most practical application of AI and machine learning, but the fact that it can create new music given only a genre and lyrics as input is quite astonishing. Jukebox can also rewrite existing music, generate songs based on samples, and even do covers of famous artists. Samples are offered in the voice of Elvis Presley, Katy Perry, Frank Sinatra, and Bruno Mars (at jukebox.openai.com). The results are nowhere near realistic, but listening to 'Katy Perry' or 'Frank Sinatra' in different styles shows that Jukebox is capturing some aspects of their musical styles. As OpenAI specified on their blog, "the results researchers got were impressive; there are recognizable chords and melodies and words".

But how did OpenAI do it?

OpenAI's engineers made use of artificial neural networks (ANNs), which are essentially machine learning algorithms used to identify patterns in images and language. Here they are used to identify patterns in audio: millions of songs and their metadata are passed through these neural networks, from which new music is created. In other words, the engineers provided the AI computer with a huge database of songs and then instructed it to create new tracks that follow the same patterns and beats found in that database.

Creating tracks that resemble the provided samples requires a lot of computing power, and the AI computer has to undergo intensive training with large amounts of data. According to the OpenAI team, to train the model they created a new dataset of 1.2 million songs, 600,000 of them in English, paired with their lyrics and metadata including the genre, artist, and year of each song.

Technical details of the training model, for those of you who are into ML engineering (others can skip, of course 🙂):

▪ The model the AI was trained on had two million parameters and ran on more than 250 graphics processing units for three days.
▪ The sampling sub-model, which adds loops and transitions to a track, comprised one billion parameters and was trained on about 120 graphics processing units for many weeks.
▪ The top level of the output hierarchy has more than five billion parameters and was trained on more than 500 GPUs.
▪ The lyrics output by Jukebox also went through an intensive two weeks of training.
▪ The model is trained on 32-bit, 44.1 kHz raw audio using a Vector Quantized Variational Autoencoder (VQ-VAE), since generating music from uncompressed audio directly would take far longer because of the very long sequences involved.

The training model and code are available in the openai/jukebox GitHub repo.
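Since the VQ-VAE is the piece that makes raw-audio modeling tractable, a tiny sketch of its central operation may help: each encoder output is snapped to the nearest entry in a learned codebook, so a long audio waveform becomes a much shorter sequence of discrete tokens. All shapes and values below are illustrative, not Jukebox's real configuration.

```python
# Toy sketch of the vector-quantization step at the heart of a VQ-VAE:
# continuous encoder outputs are mapped to discrete codebook indices.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))  # 512 learned code vectors, dim 64
encoded = rng.normal(size=(100, 64))   # encoder outputs for 100 audio chunks

# For every encoded vector, find the index of the closest codebook entry.
dists = ((encoded[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
codes = dists.argmin(axis=1)           # discrete tokens, one per chunk

quantized = codebook[codes]            # what the decoder reconstructs from
print(codes[:10])                      # raw audio has become token IDs
```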
Limitations of the training model

There is a significant gap between music created by the Jukebox neural network and human-created music. Songs created by Jukebox show plenty of familiar features, such as coherence, solos, and older instrument patterns, but they lack choruses and the repeated structure of a song. Sampling the tracks produces noise that degrades the overall quality of the track. Performance of the trained model is also not up to par: on average it takes about 9 hours to fully output one minute of audio, which can be a bottleneck when rendering and delivering audio samples on cloud platforms. Lastly, the model only produces English tracks, since it was only given a database of English songs; samples and lyrics in other languages have not yet been trained on the platform.

Legal and ethical issues with such AI models

Jukebox raises other issues when it comes to delivering a sample from the provided input. The first is copyright: training an AI on already-recorded music will always require a copy of that track, although this type of training is generally considered 'fair use'. The second issue is the output, and this one can have serious consequences. Jukebox produces new tracks from existing metadata, namely the lyrics and genre. What if those lyrics are protected by copyright? What if music generated 'in another style of the genre' creates a different image of the original singer in front of their audience?

In many areas of the music community there could be objections to the Jukebox platform, whether on the basis of copyright infringement or of decreasing the value of human-made music. But alongside the issues come benefits: music creators will be excited and curious about Jukebox and about how they can bring this creative AI technology into their workflows.

All these opinions and questions are completely natural; they always come with the latest technological innovations. Is AI good or bad for humans? Well, it all depends. The best option is to explore and understand what Jukebox technology is really capable of: understanding the technology helps in forming reasoned opinions while reducing real issues with the platform.

Conclusion

Overall, Jukebox represents a step forward in improving the musical quality of generated samples with new lyrics, giving creators more freedom over the music they generate. The ability to steer the output by artist, genre, and lyrics is one of Jukebox's biggest strengths.

Also, this is not the first music AI tool the San Francisco-based AI laboratory has delivered. OpenAI has been working on generating audio samples conditioned on different kinds of metadata for many years now. Last year it released MuseNet, which was trained on a large amount of MIDI data using deep neural networks to compose new tracks with different instruments and in genres from country to pop to rock.

Looking to leverage AI in your organization? Reach out to us here. Interested in joining our ML team? Please check out the open positions here.
May 29, 2020

Healthcare 2.0 – The Post-Corona Plot

It was only a few months ago that the healthcare industry around the world seemed undeterred, unswayed by any external factor, least of all an invisible virus. This one deadly virus, however, has changed how we look at healthcare now. Among economies, service sectors, and other industries, healthcare has been the hardest hit, and it has been at the forefront of this battle.

Social distancing, isolation, quarantine, and self-sufficiency have become the new norms of the day. Countries are shifting focus from imports and exports towards becoming self-sufficient. Work cultures are changing: companies are not only allowing professionals to work from home, office parties have gone online as well. Considering all of this, industries across the spectrum must mutate, just like the virus that is forcing them to.

Since it looks like the virus is here to stay for some time, until a vaccine has been developed, the only way forward is to build a life around it. Building business models that are COVID-19-centric is one way to go. This would provide long-term solutions instead of mere shortcuts, and a framework for times to come. But not everything is as hunky-dory as it sounds. Changing business models overnight and adjusting to the post-corona scenario is easier said than done, especially in an industry as regulated as healthcare. The one solution healthcare has come up with for these uncertain times is going digital, and this is being done through telemedicine. Some companies doing really well in this arena are Beam, GYANT, and Hale Health, among many others.

What is telemedicine?

Ever since the turn of the century, every industry has gone through structural changes, with the invention of the internet and the advent of technology the main forces behind them. Healthcare has not been left untouched either, and the result of its structural change is telemedicine. Telemedicine refers to conducting medical activities and healthcare-related services using electronic information and telecommunication technologies. Even though the usage of telemedicine is still relatively low, the industry is projected to grow to $130 billion by 2025, and given the changing dynamics of the world, thanks to COVID-19, it may reach that figure well before then.

Instead of asking what telemedicine is, the real question should be why telemedicine. Telemedicine could help politicians fulfill the dream of an affordable healthcare system that they promise almost every year. Its importance today lies in the spirit of social distancing and hygiene: for one thing, isolation can be maintained between doctors and patients, not just in spirit but in reality. An estimated $2.9 trillion is spent on healthcare in the USA alone, and almost $250 billion of it is unnecessary spending. With a little upgrading of technological infrastructure, healthcare businesses can save plenty, including on employee housing and office maintenance. A proper, error-free record of patient data can be maintained with the help of software integrated with this technology. Service quality is the most important factor in a service industry like healthcare, and telemedicine may also elevate it while offering patients easy accessibility.
Patients are in general scared of going to hospitals, and even more so in these Covid times. Telemedicine could address this shortcoming, as the need to go to the hospital is eliminated almost completely. Patients and doctors consult over video calls and the required tests are prescribed. This has also increased the efficiency of sample collection: samples are collected from the doorstep and reports are delivered online. Think about the amount of paper, fuel, and time this will save in turn!

Companies will also return to strategic stockpiling of pharmaceutical commodities and essential goods. Strategic stockpiling was prevalent during the Cold War and immediately after the oil shock of 1973; companies eventually gave up the practice because of the heavy costs of holding such large amounts of excess inventory. However, it will be done again, considering the shortages of goods and the losses incurred at the onset of the pandemic.

Once scientists find a cure for this deadly virus, the focus will shift to vaccine research. Funds are expected to flow in this direction, which was previously left largely to philanthropists. Vaccine research will become mainstream, as experts predict this may just be the beginning.

Apart from all these changes, it is high time, post-pandemic, that a universal health care scheme is put in place. The world should move swiftly in this direction to make healthcare easily accessible to all. Let alone the poorer countries, even a superpower like the US was unable to manage the situation efficiently for its population. A UHC scheme has been discussed in the UN General Assembly and the WHO, but for some reason it has never received the attention it is due.

Post-pandemic, you might also walk into your doctor's cabin and find his or her new assistant to be a robot, with all your medical history ready at its fingertips. This integration of human intelligence with artificial intelligence and machine learning is meant to enrich the consultation. Doctors could directly access patient records and medication history instead of the patient retelling it all over again. This reduces wasted time, and the patient can jump straight to explaining the problem. It could be improved further to reduce the strain the healthcare system currently faces.

Now, what are some of the challenges the healthcare industry faces in implementing all that has been discussed above? Lack of training and infrastructure is the immediate challenge in adopting telemedicine and telehealth, as well as in integrating AI, ML, and human intelligence. This might not be a long-term problem, but our healthcare personnel aren't yet trained for the digital infrastructure telehealth requires. The next is the lack of human touch: although telemedicine is extremely accessible and affordable, many patients fear the loss of human sensitivity. There is a lack of trust in technology, and people may not get the 'feel' of their regular doctor's appointment.

Once most of healthcare goes digital, blockchain technology can be used for functions such as record management, healthcare surveillance, and monitoring epidemics. Since information once entered cannot be manipulated, transparency and patient data security can be ensured by implementing blockchain in the healthcare system on a large scale. But as mentioned at the beginning of this article, change doesn't happen overnight; it takes time for people to adapt.
The future of the healthcare industry lies in telemedicine. While there are both pros and cons associated with it, what is needed are proper regulations along with COVID-centric as well as long-term policies. Once that is done, there is no doubt we can create a universally accessible and affordable healthcare system with no boundaries for anyone.
May 19, 2020

How Blockchain is Driving New Opportunities in the Industrial Sector

Blockchain technology was introduced in 2008 by the still-unidentified creator of Bitcoin, though it was first outlined in 1991 by Stuart Haber and W. Scott Stornetta, whose idea was a system in which document timestamps could not be tampered with. The technology suddenly came into the limelight as the backbone of Bitcoin when Bitcoin prices soared in 2017. Since then, researchers, bankers, and industrialists have been exploring every possible avenue for new use cases.

While some say Blockchain was just a craze that is fading away, a whole other portion of the population says otherwise. Many have been looking for ways Blockchain can be used to optimize business processes, reduce operational costs, and enhance security. Even though Blockchain has its drawbacks, like most other technologies, it surely is a fantastic technology that has opened numerous opportunities for almost every business. The deeper you dig, the more use cases surface.

What is Blockchain?

For those who might need a bit of revision: a Blockchain is a digital, open, immutable ledger that records transactions between two parties chronologically. It consists of a constantly growing list of records, called blocks, that are linked using cryptography. Value is exchanged over the internet without an intermediary, and no block can be changed without invalidating the blocks that follow it. Any data once created cannot be deleted.
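The tamper-evidence just described comes from a simple mechanism: every block stores the hash of its predecessor, so editing any block breaks every link after it. Here is a minimal sketch of that idea; real chains layer consensus, signatures, and networking on top of it, and the example records below are invented.

```python
# Minimal sketch of the hash-chaining that makes a blockchain tamper-evident.
import hashlib
import json

def block_hash(block):
    payload = json.dumps({"data": block["data"], "prev_hash": block["prev_hash"]})
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(data, prev_hash):
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

# Build a small chain of (invented) supply-chain records.
chain = [make_block("genesis", "0" * 64)]
for record in ("shipment picked up", "customs cleared", "delivered"):
    chain.append(make_block(record, chain[-1]["hash"]))

def chain_is_valid(chain):
    return all(block["hash"] == block_hash(block) and
               block["prev_hash"] == chain[i - 1]["hash"]
               for i, block in enumerate(chain) if i > 0)

print(chain_is_valid(chain))          # True
chain[2]["data"] = "customs skipped"  # tamper with one record...
print(chain_is_valid(chain))          # False: the chain exposes the edit
```

Note that the forged record is caught not because anyone compared the data itself, but because the stored hashes no longer line up.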
Since Blockchain came into the limelight in 2017, industry experts have been trying to identify ways to capitalize on the technology: increasing efficiency, transparency, and security, and reducing costs in their respective fields. From banking, infrastructure, transport, and digital services to supply chains, cloud storage, and cloud computing, Blockchain has found its use in many industries, and organizations of every size are readily adopting it. If you haven't entered this race yet, it's time to get in touch with a software technology partner. Still not sure whether you can implement blockchain within your industry? It's best to continue reading.

Blockchain Industry Applications

1. Supply Chain Management

The industrial sector, which comprises construction and manufacturing, can utilize this technology for numerous purposes. As the supply chain forms an important part of the sector, use cases can be plenty. Blockchain can increase accountability and traceability, and even reduce the mileage fraud committed by reporting delivery routes longer than those actually driven. Using Blockchain record-keeping solutions, all supply chain stakeholders can securely store and share documents like bills and contracts, maintaining transparency among all players. This not only improves security but also reduces the effort and time that goes into documentation. These documents can be accessed by all the players involved in the process: shippers, freight forwarders, ports, and ocean carriers. Paired with IoT and AI, Blockchain can track shipments, containers, and all certifications associated with products, thereby providing a digital passport for every product that indicates its authenticity and prevents the sale of fake goods. Since there are several players in the supply chain, each can query the Blockchain for the information relevant to them using different software solutions. That includes the buyer, who can scan a code and access the information on the Blockchain to check every step of the production process.

Luxury goods manufacturers see vast scope in this arena, as the technology can track the history of ownership and verify claims of authenticity along the way. An additional use in supply chain management is temperature-controlled transportation and tamper-proof storage. Thinking about implementing blockchain in your organization's supply chain? Check out the list of blockchain enterprise solutions we provide.

2. Manufacturing

By improving supplier order accuracy, shipment traceability, product quality, and delivery schedules, manufacturers will be able to produce and deliver more, and in turn sell more. Since all the data is stored cryptographically, no one outside the system can change anything in it, while all of it is visible to everyone in the network. One of the most important uses of Blockchain in manufacturing is "smart contracts", which eliminate intermediaries and facilitate immediate payments.

3. 3-D Printing

Blockchain can also be used with 3-D printing to allow customers to decide when and where they want to produce something, combining both technologies to enable a point-of-use, time-of-use supply chain. As several designers work together on a design file from different locations, a blockchain database reduces confusion and acts as a shared cloud. 3-D printing also requires highly powered computers in factories for calculations, and the load usually falls on one machine; Blockchain allows the calculations to be broken down across a chain of several computers, reducing the load and speeding up the process. The security of blockchains also allows sensitive data, including the necessary printing parameters, to be shared safely: data can be securely transferred to a verified 3D printer exactly where it is needed, saving inventory, import, and logistics costs. After production, the parts can be authenticated, helping customers verify that the products were not counterfeited. This can be especially beneficial in manufacturing military equipment, airplanes, and the like, arenas that not only require large volumes of parts and spares but require every part to be genuine.

4. Real Estate

This disruptive technology is also finding its use in construction, automating the contractual processes and paperwork underpinning complex projects. Real estate companies in Amsterdam are applying Blockchain to projects in the city's harbor, setting up a Blockchain-enabled project management system and making the building development life cycle more efficient. The emphasis is on recording transactions at binding moments, where accuracy and audit trails are crucial. A California-based Blockchain firm is demonstrating the idea of making all the information about a project available to the owner in a Blockchain ledger, delivered as part of the project itself. Any refurbishments to the building can also be documented, and the whole repository can be transferred to new owners if the property goes up for sale.

The use of Blockchain in the industrial sector looks very lucrative, and further study of the sector can reveal many more use cases. But when it comes to implementing these opportunities, a thorough evaluation needs to be done.
It must be determined whether implementing blockchain technology will actually benefit the organization and the sector; otherwise, it might just be a costly investment with very low returns. Are you also planning to introduce blockchain technology to your business? Share your requirements with us at contactus@tipstat.com and learn how we can help.
May 11, 2020

How to Manage Remote Teams? (A List of Productivity Tools)

Remote teams, virtual teams, and work from home have become buzzwords these days, especially during the Covid-19 pandemic. Even before, in this global economy, geographically dispersed remote teams were quite prevalent. This can be attributed to the numerous benefits that working remotely offers to employers as well as employees. For employers, having a remote team can mean lower investment in real estate, access to a larger talent pool, lower absenteeism, lower salaries, and savings on equipment, among other benefits. For employees, working remotely provides flexibility, increased savings, greater freedom, and an increased sense of wellness. Employers can either have their teams in a different country or have their employees work from the comfort of their homes. Whichever they choose, having virtual teams comes with some inherent challenges:

Physical distance creates social distance, causing a feeling of isolation.
Employees in different locations may not be familiar with each other, creating a feeling of hostility.
Different time zones make working together on a project cumbersome.
Language and cultural barriers make communication difficult.
Tracking employee performance is a struggle.
Addressing the virtual team jointly is hard.
Conflicts must be resolved from a distance.

Numerous companies have come up with quite innovative solutions to these problems and manage remote teams effectively. It still depends entirely on how managers implement these solutions to benefit their employees.

We have always been well placed for remote work, even before the pandemic occurred. We make use of many of the G Suite services to integrate the work our team members do, and since most of these applications are easy to use, the flow of work and communication does not get disrupted. We use different tools for different tasks: Bitbucket for code management and repositories, Pipedrive for sales CRM, and open-source project management software like KanBoard. We utilize Google Hangouts extensively for all conversations that do not need to be documented, and Google Drive is another tool our team uses to share documents and access them from any location. Regardless of the tools we use now or in the future to manage productivity, we do not believe in using the "number of hours spent on work" as a performance yardstick. We do not encourage the use of time-tracking tools, and we allow teams to complete tasks at their own pace within deadlines. That's how we build trust with our employees and deliver innovative software solutions to businesses.

Let us have a look at some other tools that companies around the globe are using to enhance productivity and manage remote teams. For each type of challenge, there are numerous solutions and tools available.

1. Project Management Tools

Whether working remotely or from a co-located office space, every employee is part of a team assigned to a specific project, and when several departments work together to complete a single project, virtual teams may find it difficult to collaborate. Project management tools like Basecamp, Trello, Jira, Instagantt, Airtable, and ProofHub provide a one-stop solution for managing such projects from one spot. From Kanban boards, custom workflows, scheduling, task management, and Gantt charts to managing documents and files and assigning work within the team, these project management applications make work much simpler.
2. Team Collaboration Tools

Working remotely means that almost all communication happens over mail, which can overwhelm anyone whose organizing skills are not on point; important messages often get buried under loads of others. The project management tools mentioned above can help organize work messages automatically, and alongside them, tools like Slack and Troop Messenger help employees communicate easily by bringing all communication to one place. While Zoom is quite well known for video conferences and calls, other applications like Nextiva, Whereby (formerly appear.in), and Uberconference allow employees to text, video call, attend video conferences, and share screens. Zappy is another solution that even allows you to record your screen, which may be required for several purposes.

3. Time Converter Tools

Scheduling becomes a problem when the time zones of remote employees differ: employees often receive replies to their messages the next day, causing delays and a broken flow of work. Tools like World Time Buddy, timezone.io, and 10 to 8 remedy this by mapping all the time zones employees are in and helping you organize meetings at a time convenient for everyone.

4. Cloud Storage Tools

Often the same piece of data needs to be accessible to multiple team members, and sending it back and forth becomes tedious and painstaking. To address this, employees can simply upload files to clouds like Dropbox or Google Drive and grant access to the respective members. Life becomes much simpler.

5. Note-Taking Apps

The best note-taking apps out there have always been Evernote and Microsoft OneNote. Another application, MindMeister, allows people to visualize, present, and share their thoughts in the form of mind maps via the cloud. These can come in handy during team meetings and while working on a task.

6. To-Do-List Managers

While almost all of us use memo apps and sticky notes to create to-do lists, Todoist is the best task planner according to The Verge. Project management solutions also offer this feature, and there are more apps like TickTick and Google Tasks. These apps let you organize your tasks, integrate them with your calendar, and collaborate on common tasks with teammates.

7. Remote Desktop Access Tools

In some organizations, before the sudden pandemic-driven shift to remote work, work could only be done on office computer systems, so all of it stayed in the office. Such offices needed to bring work home to employees, which has been facilitated by software like Microsoft Remote Desktop Client, TeamViewer, Chrome Remote Desktop, and Apple Remote Desktop. Employees can securely access all their files, resources, and apps from a remote PC.

8. Security and Encryption Tools

Data is the most expensive asset of almost every organization, and having virtual teams means all of it travels over the internet, making it susceptible to breaches. There are some common-sense security practices every organization must follow, like installing antivirus applications or using multi-factor authentication, where users are verified with options like fingerprints or SMS code verification. Opting for a VPN is also a good option while sharing data over a public network. Apart from these, some other tools can make managing data security much simpler.
1Password and LastPass are password managers that act as digital vaults for sensitive information like passwords, software licenses, and other credentials. BackBlaze is a low-cost cloud backup and storage service that lets you safely back up your work online. You can also compress your files using the powerful 7Zip application to send them safely over the internet, and sign and send documents securely to another team member using Adobe Sign.

9. Employee Motivation Tools

Employee morale may suffer when teams are virtual, and people may not get credit for their work. Apps like WooBoard allow you to create custom employee recognition programs. Chimp or Champ is another app that collects employee responses anonymously to check the happiness meter and the team's feelings, and iDoneThis allows managers to see who is responsible for which piece of work.

10. Productivity Enhancement Tools

All work and no play makes Jack a dull boy, and employers and managers need to ensure that their teams are not overworked and are working healthily. Take A Break, Please is an app that forces users to take a break after certain durations. As humorous as it sounds, there exists an app called Bartender to help employees keep their menu bar options tidy and organized. Since music also improves productivity, some apps allow users to play music in the background; while there is no common consensus on the best one, Spotify and Gaana are some common choices. Disconnecting from work, and from so many apps, is also one of the most important steps to ensure employees don't burn out. Surprisingly, there is an app for that as well! While some can disconnect just by turning off their laptops, others can make use of Headspace, which helps you practice mindfulness and meditation.

Apart from using these tools and applications, there are some common guidelines employers need to follow on how to manage remote teams:

Be precise and set clear expectations for your employees.
Keep in touch on a periodic but regular basis.
Create a communication strategy.
Know the language and the culture of your remote employees.
Trust your team, even though you need policies and regulations in place.
Recruit people who blend into your organization's culture and values.
Connect employee goals with the organization's.
Leverage technology.

No system works in isolation, and no one size fits all. Managers need to carefully analyze their organization, its culture, its objectives, and employee needs, and combine the tools and tips mentioned above into the mix that suits them best to manage remote teams, whether their team works from home or in a different country.
Apr 30, 2020

Scientists at Intel Create a Neuromorphic Chip that can Smell

While humans are competing with machines, trying to become as efficient as they are, it looks like computers are becoming more and more like humans. With Intel's latest invention, computers will now be able to smell, much as human beings do. This development has been possible thanks to the joint efforts of Nabil Imam, a senior research scientist at Intel's Lab, and olfactory neurologists at Cornell University. Together, they have mimicked the sense of smell on Intel's neuromorphic computing chip, called Loihi.

Intel developed Loihi in November 2017 in order to emulate the neural structure and operation of the human brain, an approach known as neuromorphic computing. Loihi fulfilled the functional requirements needed to implement spiking neural networks (SNNs): its 128-core design has an architecture suited to SNN algorithms, where an SNN is essentially an arrangement of computational building blocks that emulates neural circuits. The chip has about 130,000 silicon neurons connected by 130 million synapses. In contrast to older forms of AI, SNN algorithms require low maintenance and much less training; they facilitate continuous learning in unstructured environments, high performance, and low power consumption, qualities that Loihi inherits automatically.

You may still be thinking, "but how can a machine smell?" To understand it better, let us first look at how the brain identifies different smells. The air we inhale generally contains different odor molecules, which bind themselves to receptors in the nose called olfactory receptors. The receptors, which extend into the olfactory bulb, immediately send signals to the brain's olfactory system, where an interconnected group of neurons generates electrical pulses, leading to a sense of smell.

How Does the Neuromorphic Chip Work?

Scientists at Cornell University studied the olfactory systems of mammals and other animals as they smelled different odors, measuring the electrical pulses generated in their brains during the process. The scientists derived a mathematical algorithm from the neural circuit diagrams and electrical activity, and this algorithm was then configured on Intel's neuromorphic chip. Even though neuromorphic computing is a relatively new field of AI, owing to SNNs it requires much less training. Older forms of AI had to be trained repeatedly, and previously learned data would get disturbed when something new was fed into the system. The key to this new system is its neuromorphic structure, which represents the neural circuitry of mammals and facilitates continuous learning from the stimuli it receives in unstructured environments. The neuromorphic AI system learns a particular smell and never has to be reminded again: just like a human brain, it adds new smells to its memory once and for all, without disturbing the previously learned smells. While human brains can associate memories with different odors, or even cross-reference different ones, Intel's project still has a long way to go.
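The basic computational unit of the SNNs that Loihi runs in silicon is the spiking neuron. As a rough illustration of the concept, here is a toy leaky integrate-and-fire neuron in Python; the parameters are invented, and this is in no way Intel's actual neuron model.

```python
# Toy leaky integrate-and-fire (LIF) neuron, the basic unit of the spiking
# neural networks that neuromorphic chips implement. Values are illustrative.
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Integrate input over time; emit a spike whenever the membrane
    potential crosses threshold, then reset."""
    v, spikes = 0.0, []
    for i in input_current:
        v += dt * (-v / tau + i)   # leaky integration of the input
        if v >= v_thresh:          # threshold crossing -> spike
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(0)
print(lif_neuron(rng.uniform(0, 0.2, size=50)))  # a sparse spike train
```

Information in such a network is carried by the timing of these spikes rather than by dense numeric activations, which is part of why SNNs can run with very low power.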
The neuromorphic chip identified the smells correctly 92 percent of the time, whereas the traditional system was accurate only 52 percent of the time. The newer AI was also able to identify a particular aroma even when it had been mixed with other odors. Like every other invention, though, this relatively new AI has its shortcomings: even though the system can identify previously learned smells, it may get confused between smells that fall under the same category, like oranges from Brazil and oranges from India.

Scope of Neuromorphic Computing in the Future

Despite being at a relatively early prototype stage, the system can be developed further for use in several areas. It could smell explosives or weapons at airport security, identify a gas leak in a factory and even tell you which gas it is, and alert factory owners when a particular emission level has been crossed. “Your smoke and carbon monoxide detectors at home use sensors to detect odors but they cannot distinguish between them; they beep when they detect harmful molecules in the air but are unable to categorize them in intelligent ways,” says Imam.

This advancement in neuromorphic computing could also improve the medical diagnosis of diseases identifiable through smell. Diseases like cancer or Parkinson’s might be diagnosed in their early stages: they sometimes produce a peculiar odor that people with a heightened sense of smell can detect, a role AI could take over. Nabil Imam says he now plans to expand the underlying concept to tackle real-world problems. According to him, “Understanding how the brain’s neural circuits solve these complex computational problems will provide important clues for designing efficient and robust machine intelligence.”
Apr 24, 2020

How is Machine Learning Improving our Social Media Experience?

Are you on Instagram? If yes, have you followed Lil Miquela yet? If not, you could probably check out her page. The reason you are being asked to do so is that Lil Miquela is no human being. She is a virtual influencer created using AI by Trevor McFedries and Sara DeCou. Not only does she dress like a human, but she also has a personality like one and behaves like one too. She supports Black Lives Matter, LGBTQ causes, and reproductive rights, and she is even talking about COVID-19 these days!

Lil Miquela is just one of the creations of AI. There are lots of misconceptions about what Artificial Intelligence is, which need to be cleared up first. AI is an area of computer science that emphasizes the creation of machines intelligent enough to act like humans. People often use the terms Machine Learning and Artificial Intelligence interchangeably, but in fact, Machine Learning is a subset of AI. It is a branch of AI based on the idea that, after basic human programming, systems can learn from the large amounts of data available, identify patterns, and make decisions.

Some people say they haven’t come across AI yet, and often equate AI and ML with robots. This couldn’t be further from the truth: each one of us interacts with some form of AI and ML every day. For this discussion, let us focus on ML. In our daily lives we come across different forms of ML every time we interact with navigation systems like Google Maps, which not only show you the way but even predict traffic for you; virtual assistants like Siri that converse with you; chatbots on different websites that help with your queries; and many more. Almost all of us are present on some form of social media, and most of us would have missed the obvious instances where we interact with ML there regularly.

Social media is used by millions of users every day, and millions of posts and pictures are uploaded daily. Monitoring this kind of data manually is an impossible task; there simply isn’t enough manpower. Businesses are trying their best to leverage the power of social media and to track, analyze, and reach their customers through it. Without technologies backed by Data Science, like Machine Learning, data analytics, and business intelligence, it is impossible to handle the oceans of data so easily available today. Name any social media platform, and you will see that it has implemented ML to improve the user experience and to serve its business goals better.

Facebook has implemented ML in several functions to make the platform much more user-friendly than before, and has employed it to tackle the enormous problem of fake news and hate speech. With people working from home due to the COVID-19 pandemic, Facebook, Twitter, and Instagram have all moved to machine learning to remove harmful content. The one problem machine learning still faces is that it lacks the power to reason like humans. So, even though the social media giants have employed ML for content moderation, no content gets removed unless verified by humans, which means there is still room for error while employing AI and ML. On Facebook, ML can also analyze the content of videos, caption them, and even translate posts into different languages so that people all over the world can access the platform more freely.
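That flag-then-verify workflow can be illustrated with a toy classifier. This is a minimal sketch, not any platform’s actual pipeline: the training posts, labels, and review threshold below are all invented, and real moderation systems train far larger models on millions of labelled examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny invented training set -- real systems learn from millions of
# human-labelled posts; these examples exist only for illustration.
posts  = ["have a great day", "you are wonderful",
          "I will hurt you", "everyone from that group is vermin",
          "lovely photo!", "go away or else"]
labels = [0, 0, 1, 1, 0, 1]   # 1 = harmful, 0 = benign

vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(posts), labels)

def triage(post, review_threshold=0.5):
    """Route a post to human review when the model suspects harm."""
    p_harmful = clf.predict_proba(vectorizer.transform([post]))[0, 1]
    return "send to human reviewer" if p_harmful >= review_threshold else "publish"

print(triage("I will hurt everyone"))   # likely flagged for review
print(triage("what a great day"))       # likely published
```

Note the design choice the sketch mirrors: the model never deletes anything on its own, it only decides what a human should look at, which matches the platforms’ practice of keeping people in the loop.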
When users upload photos, they are automatically prompted to tag their friends, and this happens precisely because of face recognition technology. It does, however, present a privacy threat for the future: if Facebook can identify people’s faces from a few pictures uploaded to the platform, it will soon be possible to point your camera at someone and identify them on Facebook. Facebook also allows users to convert panoramic pictures into 360-degree photos.

Machine learning enables Twitter to drive engagement by showing the most relevant tweets at the top of users’ feeds, which were earlier shown in reverse chronological order. This is done using algorithms that score thousands of posts for each user. Twitter has also deployed ML to fight harmful content; at the beginning of 2017, it used AI to shut down nearly 300,000 accounts identified as terrorist accounts. The platform even uses ML to crop images so they appear more appealing, and to identify the content of videos, categorizing them to enable much easier search.

Instagram, the other social media platform extensively used by both people and businesses, is not behind in using AI and ML to optimize the user experience and make the platform more secure. To recommend relevant stories and posts correctly, it uses “word embedding” to study which words are more likely to appear next to each other, and recommends posts based on similar accounts rather than on content previously watched (a toy sketch of this idea appears below). The algorithm even takes into account which posts the user does not like to see and tailors the user’s “Explore” tab accordingly. Businesses leverage Instagram’s use of ML to identify potential influencers and ensure they reach the relevant target groups.

Using ML, Pinterest identifies the subjects of images, studies their visual patterns, and matches them to other images; by studying the pins users have saved, the platform recommends similar images to them. YouTube does the same by analyzing what a particular video is about and recommending similar videos based on the metadata associated with each one.

Machine learning is on its way to revolutionizing social networking app development, with content moderation being the most visible function. The algorithms behind each application need to be written only once, after which the systems learn from the large amounts of data they are fed. The question that arises now is: how else can ML change social media applications? It is evident from the uses mentioned above that ML can work for the benefit of people, but it can cause harm as well; invasion of privacy is one of the major concerns, reducing users’ safety and security. Since ML is already extensively used for content moderation, it could also be used to track malicious conversations happening on the internet. Already used for blocking trolls, the technology could make conversations on posts healthier and prevent plagiarism of original posts. Detecting child sexual abuse material or removing content posted without consent could be other applications, and face recognition could identify lawbreakers captured on surveillance systems, helping catch those who are absconding.
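Here is the promised toy sketch of embedding-based recommendation. The vectors below are invented three-dimensional stand-ins: real word embeddings are learned from co-occurrence statistics and typically have hundreds of dimensions, and this is not Instagram’s actual system.

```python
import numpy as np

# Toy 3-dimensional "embeddings" -- invented for illustration; real
# systems learn high-dimensional vectors from co-occurrence data.
embeddings = {
    "sneakers":  np.array([0.9, 0.1, 0.0]),
    "running":   np.array([0.8, 0.3, 0.1]),
    "baking":    np.array([0.1, 0.9, 0.2]),
    "sourdough": np.array([0.0, 0.8, 0.3]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(term, k=2):
    """Rank the other terms by embedding similarity to `term`."""
    scores = {w: cosine(embeddings[term], v)
              for w, v in embeddings.items() if w != term}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("sneakers"))  # ['running', ...] -- related interests cluster
```

Because interests that co-occur end up with nearby vectors, ranking by cosine similarity surfaces related content, which is the intuition behind recommending posts from “similar accounts.”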
Social media, in some form or the other, is one vast place where everyone is present. Leveraging the power of ML to optimize the experience and solve problems requires the same effort as using it to cause harm. The decision and the choice lie with those writing the algorithms behind the scenes and making it happen.