Stefan Leadbeater

I joined LUCA, the Big Data specialists of Telefónica in March 2019 as a Digital Marketing Intern in the Marketing and Communications team. I am in my third year of study at the University of Bath reading International Management with Spanish. I work closely with our various social media platforms as well as communicating Big Data and AI news through my weekly blogs.
AI & Data
How AI is revolutionising the Classical Music industry: An analysis of the musical AI by Aiva Technologies
Previous LUCA posts have touched upon the increasing use of AI in the music industry; today, however, I would like to focus on its use in classical music. In recent years, several companies have developed AI systems capable of composing classical music without human intervention, such as 'Iamus', created by the University of Málaga. The one that has stood out above its competitors, however, is Aiva Technologies, one of the leading start-ups in the field of AI music composition. Aiva Technologies was founded in Luxembourg and London by Pierre Barreau, Denis Shtefan, Arnaud Decker and Vincent Barreau, and the AI they have created is called 'Aiva' (Artificial Intelligence Virtual Artist). In 2017, the company was invited to participate in the European Film Market in Berlin, a highly acclaimed event, as well as the Artificial Intelligence in Business & Entrepreneurship (AIBE) Summit in London. On top of this, having released an album titled Genesis along with many other single tracks, Aiva became the first AI in the world to officially receive the status of Composer: it is registered under SACEM, the authors' rights society of France and Luxembourg, and its music can now be copyrighted in its own name.

The following video is a TED talk given by Pierre Barreau, one of the founders of Aiva, giving a more detailed explanation of the AI's expansive capabilities: https://www.youtube.com/watch?v=wYb3Wimn01s

How does the technology work?

The technology behind this innovative AI is based on deep learning algorithms that use reinforcement learning techniques. Deep learning is a type of machine learning in which multiple layers of 'neural networks' are programmed to process information between various input and output points. This allows the AI to understand and model high-level abstractions in data, such as melodic patterns.
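To make the idea of stacked layers more concrete, here is a minimal feed-forward network sketch in NumPy. The layer sizes, random weights and the melodic framing are purely illustrative; this is not Aiva's actual model, just the general shape of a multi-layer network.

```python
import numpy as np

# A minimal sketch (not Aiva's model) of how stacked layers transform
# an input into progressively higher-level representations.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass an input vector through a stack of (weights, bias) layers."""
    for w, b in layers[:-1]:
        x = relu(x @ w + b)      # hidden layers build intermediate abstractions
    w, b = layers[-1]
    return x @ w + b             # output layer, e.g. scores over candidate notes

# Toy network: 12 pitch-class inputs -> two hidden layers -> 12 outputs.
sizes = [12, 32, 32, 12]
layers = [(rng.standard_normal((i, o)) * 0.1, np.zeros(o))
          for i, o in zip(sizes[:-1], sizes[1:])]

melody_fragment = np.zeros(12)
melody_fragment[0] = 1.0         # a one-hot encoded note
scores = forward(melody_fragment, layers)
print(scores.shape)              # one score per candidate next note
```

Each hidden layer here is just a matrix multiplication followed by a non-linearity; depth comes from chaining several of them, which is what lets such models capture patterns more abstract than the raw input.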
As mentioned, reinforcement learning is also used in this AI technology. Before describing what this form of learning is, it helps to understand its name: in psychology, reinforcement is used to encourage future behaviours through positive and negative stimuli, that is, rewards and penalties. Reinforcement learning is an approach to machine learning inspired by this behavioural psychology. It differs from supervised learning in that the algorithm is not explicitly told how to perform a task; instead it works through the problem by itself and 'finds its own way' towards a set objective. At the centre of reinforcement learning sit reward signals, issued upon performing specific actions, which steer the algorithm towards a better understanding of which courses of action are right and wrong. Positive reward signals encourage the repetition of a particular sequence of actions, whereas negative reward signals act as a penalty for actions that move away from the final objective, helping the algorithm to correct itself so that it stops receiving those penalties. The Aiva team have stated: "We have taught a deep neural network to understand the art of music composition by reading through a large database of classical pieces written by the most famous composers (Bach, Beethoven, Mozart, etc). Aiva is capable of capturing concepts of music theory just by doing this acquisition of existing musical works". However, it is important to remember that this technology, although making huge leaps forward, is still in its infancy.
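The reward-and-penalty loop described above can be illustrated with a toy example. This is a deliberately simplified value-learning sketch, not Aiva's system: the three-step 'melody' task and all the numbers are invented, and the agent learns purely from +1/-1 signals which choice to make at each step.

```python
import random

# A toy sketch of reinforcement learning: an agent learns, purely from
# reward and penalty signals, which "note" to choose at each of three
# steps to reproduce a target melody. Not Aiva's actual system.
random.seed(42)
TARGET = [1, 0, 1]          # the choices that earn a positive reward
q = {}                      # value table: (step, action) -> learned value
alpha, epsilon = 0.5, 0.2   # learning rate and exploration rate

for episode in range(200):
    for step in range(3):
        if random.random() < epsilon:   # occasionally explore at random
            action = random.choice([0, 1])
        else:                           # otherwise exploit learned values
            action = max([0, 1], key=lambda a: q.get((step, a), 0.0))
        # Reward signal: +1 for the target choice, -1 (a penalty) otherwise
        r = 1.0 if action == TARGET[step] else -1.0
        old = q.get((step, action), 0.0)
        q[step, action] = old + alpha * (r - old)

learned = [max([0, 1], key=lambda a: q.get((s, a), 0.0)) for s in range(3)]
print(learned)  # the agent's learned sequence matches TARGET
```

Nobody tells the agent the target sequence directly; it discovers it only through the reward signal, which is the essential difference from supervised learning described above.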
Having said that, this year Aiva Technologies opened their AI music composition platform to early beta testers. Although track customisation is still quite minimal at this stage, the musical styles currently offered by Aiva include Modern Cinematic, 20th Century Cinematic, Sea Shanty, Tango, Chinese and Pop/Rock. Alongside this exciting release, the video below was published showing just how far the algorithms have come: https://www.youtube.com/watch?v=3ObkDWmKEEA

Regarding the future of Aiva, the team plans to eventually teach it any style of music. Modern music, however, presents a challenge that has nothing to do with composition and everything to do with instrumentation and sound design. Even though some of Aiva's works have already been used in films, adverts and even game studios, instruments, as we all know, have their own specific sounds and unique qualities, which are only fully explored when they are played by humans. Recreating these sounds and effects with an AI therefore poses, and will continue to pose, a great challenge in reaching human-level performance and taking this type of AI to the next level. It is hard to know whether music created in this way will ever be indistinguishable from music composed and played by actual musicians. Having said this, testing has already begun in which expert musicians compare Aiva's creations to music performed and recorded by actual musicians, and so far they have not been able to tell the difference between the two.
July 26, 2019
AI & Data
How Big Data & Artificial Intelligence are having a positive impact in the sport of Rugby Union
The use of big data and data analytics in sport is not new; in fact, it has been around for a long time. In Formula 1, teams have used high-speed analytics for years to try to gain a competitive advantage over their rivals. In football, the world's biggest sport, data analytics has been used to monitor player performance through systems that track each player's movements and statistics during a game, such as the number of touches on the ball or metres travelled. The data intelligence industry is exploding because the value of this data is so high: the information is used to aid management in important decisions such as squad selection, and to establish where improvements need to be made. These tracking technologies have existed for a long time but lacked context, meaning their full potential could not be exploited.

One sport in which this 'boom' is ever more present is rugby, which is growing massively as a participation sport: the 2015 Rugby World Cup attracted 480,000 fans and generated over £250 million. In matches, both international and national, rugby coaches, managers and support staff can be seen sitting high up in the stands with a wide variety of computers and technology, monitoring all aspects of the match in great detail. All of this technology is needed because nowadays the leading rugby union clubs use data to monitor fitness, to help prevent injuries and to track players' movement through GPS. For GPS tracking, players wear a small device sewn into the back of their jerseys, allowing data to be collected on their heart rate and position on the field.
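As an illustration of the kind of per-position analysis this tracking data enables, here is a hypothetical sketch in Python. The player names, field names and numbers are all invented for illustration; real tracking systems expose far richer data than this.

```python
# A hypothetical sketch of per-position GPS workload summaries; the
# records and metrics are invented, not from any real tracking system.
from statistics import mean

tracking_data = [
    {"player": "Smith", "position": "winger", "metres": 6800, "sprints": 24},
    {"player": "Jones", "position": "prop",   "metres": 4900, "sprints": 6},
    {"player": "Brown", "position": "winger", "metres": 7100, "sprints": 28},
    {"player": "Davis", "position": "prop",   "metres": 5100, "sprints": 4},
]

def workload_by_position(records):
    """Average key workload metrics for each position group."""
    groups = {}
    for r in records:
        groups.setdefault(r["position"], []).append(r)
    return {
        pos: {
            "avg_metres": mean(r["metres"] for r in rs),
            "avg_sprints": mean(r["sprints"] for r in rs),
        }
        for pos, rs in groups.items()
    }

summary = workload_by_position(tracking_data)
print(summary["winger"])   # backs cover more ground with more sprinting
print(summary["prop"])     # forwards log fewer metres and sprints
```

Even this crude grouping shows why training plans are tailored by position: the workload profiles of wingers and props are very different.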
This information is analysed by the support staff, who review the physiological demands placed on the players during the game in different situations and at different levels of fatigue, so as to help plan and execute effective rehabilitation and injury-prevention measures. Those demands vary with the position played: the 'forwards' scrummage and make many intensive tackles, whereas certain positions in the 'backs', such as the winger, involve longer periods without the ball and more intense periods of sprinting. All of this information can be analysed to build training plans tailored to specific players, allowing them to optimise their potential and maximise their work rate out on the field. With these devices also worn during training, the coaching staff can see the levels of effort being put in during sessions; by monitoring the players' thresholds, the support staff can manage training so as to keep the players fresh for the games. This is possible because specialists monitor real-time data from the touchline at each training session, giving coaching staff live and accurate information upon which to base their team selections. Sir Ian McGeechan has stated that up to 80% of the selection process is now driven by data, with only about 20% left to the 'gut feeling' of the coaching staff, which he notes is also invaluable in the selection process.

Application to concussions:

As well as using data to analyse aspects of the game and to improve the performance of the team and individual players, teams are using it against the growing problem of concussions in rugby, a very serious problem if not managed correctly.
In 2018, concussion was the most reported match injury in the professional English game for the sixth season in a row, and the ever-growing body of evidence showing the long-term damage of sports-related brain injuries has led World Rugby, the game's global governing body, to search for more efficient and effective ways of identifying and treating concussions in the modern game, both during and after matches. The result of their work is the Head Injury Assessment (HIA), a series of checks to determine whether a player has suffered a concussion and whether further treatment is required. There are three stages to this process:

1. Match officials or doctors spot a suspected head impact on the pitch. If the medical professionals recognise symptoms of a concussion, the player is immediately removed from the game; if the signs are unclear, a 10-minute substitution is made so that tests can determine whether the player is fit to return.

2. Every player on the team undergoes a further test after the game to check their neurological systems and to compare their memory against a previously taken baseline, revealing whether any signs of concussion were missed by officials.

3. If signs do show up at this stage, a further test is taken after two nights' sleep using technology developed by CSx, which collects neurocognitive information that professionals can review to determine whether a concussion has occurred.

This data is then transferred via an API to the data analytics platform Domo, where the various datasets can be joined up and analysed further. The system was tested during the 2015 Rugby World Cup in England, and World Rugby plans to use it again this year at the 2019 Rugby World Cup in Japan.
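The three-stage flow above can be sketched as simple decision functions. The function names, inputs and return strings here are simplified stand-ins for illustration only, not World Rugby's actual protocol logic.

```python
# An illustrative sketch of the HIA decision flow described above; the
# criteria and wording are simplified stand-ins, not the real protocol.
def hia_pitchside(clear_concussion_signs: bool, signs_unclear: bool) -> str:
    """Stage 1: the on-pitch decision after a suspected head impact."""
    if clear_concussion_signs:
        return "remove from game immediately"
    if signs_unclear:
        return "10-minute substitution for off-field assessment"
    return "continue playing"

def hia_postmatch(memory_score: int, baseline_score: int) -> str:
    """Stage 2: post-match check of memory against the player's baseline."""
    if memory_score < baseline_score:
        return "flag for stage 3: neurocognitive test after two nights' sleep"
    return "no further action"

print(hia_pitchside(clear_concussion_signs=False, signs_unclear=True))
print(hia_postmatch(memory_score=21, baseline_score=25))
```

The key design point the real protocol shares with this sketch is the baseline comparison: each player is judged against their own pre-season scores, not a population average.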
Applications in recruitment:

In 2016, ASI Data Science, a British AI firm looking to revolutionise the world of sports transfers, designed software to help analyse players and aid decisions about which players are bargains, and therefore the best picks for a team to purchase. They signed a deal with the British rugby union side London Irish to test out these technologies. Traditional scouting involves looking through hours of video and manually noting down abilities and weaknesses; with this software, clubs can instead enter the name of a well-known player in the position they are scouting, and it will find players with similar styles and abilities by assessing match statistics. It looks through around 100 different parameters taken from Opta, the sports data company covering every professional player around the world. The software removes bias by clustering together different types of players, making it easier for scouts and analysts to find them. In a sport where salary caps often limit the budgets of even the biggest clubs, this tool could prove invaluable to all kinds of sports clubs.

Finally, I would like to talk about the RFU (Rugby Football Union), whose first job is to create successful, winning English national teams, but which also uses AI for its second obligation: increasing participation in the sport at grassroots level. For the last five years the RFU has had a working relationship with IBM, who have helped transform its operations through technology, creating player and team databases and providing personalised interactions at important moments of players' careers. IBM also helps at the fan-engagement level with the development and implementation of a Customer Relationship Management (CRM) system, as supporters also need managing in such a global sport.
They believe all of this will help to maintain the growth of the sport and to increase the number of active players and supporters, pushing for more and more games and events.

Conclusion:

Without a doubt, the use of Big Data & AI in the world of rugby has been, and will continue to be, invaluable both for the coaching staff and for the players and their well-being. I believe the future is bright in this field and that we are going to see more and more examples of Big Data & AI applied to different aspects of rugby and of every other sport around the world.
July 19, 2019
AI & Data
AKQA uses AI to develop a completely new sport
As we already know, AI technologies are being used both within sports and to create new ones, but the latter tend to be very similar to existing sports with slight adjustments to the rulebook. The design agency AKQA has taken it upon itself to design a completely new and innovative sport with help from an AI it built... this sport is called Speedgate! The rules are relatively simple. A game consists of three seven-minute periods between two teams of six players, played on a field with three open-ended gates. To score, players must kick the ball through the central gate, which they may not pass through themselves. Having done this, they can score 2 points by putting the ball through either of the end gates, with an extra point if a teammate catches the ball and kicks it back through the gate. As the name Speedgate suggests, the game is meant to be played at a fast pace: the ball cannot be still for more than 3 seconds or it is turned over to the other team.

Figure 1: Speedgate rule sheet (Source)

As AKQA didn't want simply to rewrite an existing ruleset to produce a new sport, their creation, Speedgate, is a coming together of several different rulesets into one innovative game. This is no surprise given the AI that was used: the developers at AKQA trained a recurrent neural network and a deep convolutional generative adversarial network on over 400 sports. Training ran on NVIDIA Tesla GPUs, and the system generated over 1,000 different sport ideas, along with rules and concepts for them (it was also trained on 10,400 logos, producing the logo and slogan seen below). Having analysed the results and sifted out the ideas that did not meet their criteria, play-testing proved Speedgate to be the best option.
This is because AKQA wanted their creation to be an 'ideal package' in terms of workout and difficulty level, as well as being enjoyable to play.

Figure 2: Speedgate logo and slogan (Source)

Kathryn Webb, AI Practice Lead at AKQA, said: "GPU technology helped us to condense training and generation phases down to a fraction of what they would've been. We would not have been able to achieve so many unique ML contributions in the project without that speed. It gave us more time to test, learn and adapt, and ultimately helped to produce the best final result."

As of this moment, it is exciting to be able to say that Speedgate is very much a playable sport and a 'one-of-a-kind' in its field. Born out of an experimental project for the Design Week Portland festival, it has the AKQA team in talks with the Oregon Sports Authority to gain recognition as a real sport. There is even talk of a Speedgate league being created this summer, allowing the public to participate in this new and exciting sport.

Don't miss out on a single post. Subscribe to LUCA Data Speaks. You can also follow us on Twitter, YouTube and LinkedIn.
May 7, 2019
AI & Data
DeepMindAI partners with Moorfields Eye Hospital to develop innovative sight-saving technology
DeepMind has partnered with Moorfields Eye Hospital to develop and bring to market a diagnostic technology that can help detect the early signs of sight-threatening conditions, which, when caught early, can usually be treated and often reversed.

What technology is currently used?

Artificial Intelligence is already being used in this field, with eye specialists using Optical Coherence Tomography (OCT) scans to aid their day-to-day diagnoses. These 3D scans provide a detailed image of the back of the eye, but they have proven to have their limitations: interpreting them and making accurate diagnoses requires highly trained expert analysis, which is in short supply, making the process time-consuming. This can cause major delays in treatment, which in some cases can be sight-threatening; for instance, a bleed at the back of the eye could go untreated because of the time spent waiting for OCT scan results.

What is this new AI technology?

According to the official report, the new technology will analyse a standard OCT scan extremely efficiently (taking just 30 seconds in one demonstration) and identify from the images a wide range of eye diseases, along with recommendations for how they should be treated and the urgency of each individual patient's situation. This addresses an issue long discussed in this sector: the 'black box' problem. For the first time, the AI provides context to the professionals to aid their final decision; it not only presents its findings but explains how it arrived at its diagnosis and recommendation. This leaves room for the specialist to scrutinise the information to an appropriate level, ensuring the best treatment plan is reached for each specific patient.
This is aided by the fact that the technology reports a confidence level for its findings as a percentage, which determines the degree to which the specialist needs to question the result.

How does the technology work?

The technology uses two neural networks with an easily interpretable representation between them, organised into five distinct steps. It starts with the raw OCT scan, from which the segmentation network, using a 3D U-Net architecture, produces a detailed tissue map covering 15 classes of anatomy and pathology; this network was trained on 877 clinical OCT scans. Next, the classification network analyses the tissue map and, as its primary outcome, provides one of the four referral recommendations currently in use at Moorfields Eye Hospital:

- Urgent
- Semi-urgent
- Routine
- Observation only

To train this classification network, DeepMind collated 14,884 OCT scans from 7,621 patients referred to the hospital with symptoms suggestive of macular pathology (the macula being the central region of the retina at the posterior pole of the eye). The final step, as previously mentioned, is to provide possible diagnoses with a percentage confidence level, allowing appropriate scrutiny of the result where necessary.

Have there been any challenges to overcome?

A major challenge in OCT image segmentation (step 2 of the 5 mentioned above) is the presence of ambiguous regions where the true tissue type cannot be deduced from the image, meaning multiple plausible interpretations exist. To overcome this, multiple instances of the segmentation network were trained, each producing a fully detailed segmentation map, so that multiple hypotheses can be offered. Doctors can then analyse the evidence and select the most applicable hypothesis for each individual patient.
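The two-network pipeline described above can be sketched schematically in Python. The stub 'networks' below are random placeholders standing in for DeepMind's trained models, included only to show the shape of the data flow: scan in, tissue map in the middle, referral recommendation plus confidence out.

```python
import numpy as np

# A schematic sketch of the two-stage pipeline; the arrays and stub
# "networks" are random placeholders, not DeepMind's actual models.
rng = np.random.default_rng(1)
REFERRALS = ["urgent", "semi-urgent", "routine", "observation only"]

def segmentation_network(oct_scan):
    """Stage 1 stand-in: label each voxel with one of 15 tissue classes."""
    return rng.integers(0, 15, size=oct_scan.shape)   # the "tissue map"

def classification_network(tissue_map):
    """Stage 2 stand-in: turn the tissue map into referral probabilities."""
    logits = rng.standard_normal(len(REFERRALS))
    return np.exp(logits) / np.exp(logits).sum()      # softmax over 4 referrals

oct_scan = rng.standard_normal((8, 8, 8))             # a toy "3D OCT scan"
tissue_map = segmentation_network(oct_scan)
probs = classification_network(tissue_map)
decision = REFERRALS[int(np.argmax(probs))]
confidence = float(probs.max())                       # reported as a percentage
print(decision, round(confidence * 100, 1))
```

The interpretable tissue map in the middle is what makes the design 'two-stage': a specialist can inspect it directly, and only the first stage depends on the particular scanning device.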
Interestingly, the report states that in areas with clear image structures these network instances tend to agree and produce similar results, and that videos can even be produced showing these results with further clarity.

Testing and quality control:

Achieving 'expert performance on referral decisions' is the main hoped-for outcome of this project, as an incorrect decision could in some cases lead to rapid sight loss for a patient. In the project report, DeepMind states that to optimise quality they first needed to define a gold standard of service:

"This used information not available at the first patient visit and OCT scan, by examining the patient clinical records to determine the final diagnosis and optimal referral pathway in the light of that (subsequently obtained) information. Such a gold standard can only be obtained retrospectively."

Excluding the patients in the training dataset, gold-standard labels were acquired for 997 patients, who were used to test the framework. Each patient received a referral suggestion from the network as well as independent referral suggestions from eight specialists: four retina specialists and four optometrists trained in the retina. Each of the eight specialists provided two recommendations, one using the OCT scan alone and one using the OCT scan together with fundus imaging and clinical notes, given in separate sessions at least two weeks apart. Using OCT imaging alone, the framework matched the referrals of two of the retinal specialists and exceeded those of the other six. With full access to notes and fundus images, the specialists' performance improved; the framework then matched five of them but still outperformed the remaining three. Overall, the framework performed very well, making no serious clinical errors.
As I have previously stated, the framework uses an ensemble of five segmentation model instances, and it also uses five classification model instances, which was found to lead to more accurate results. In testing, when a score was derived for the framework and the experts as a weighted average of all wrong diagnoses, the framework achieved a lower average penalty-point score than any of the experts.

Generalising this technology?

What is exciting about this two-stage framework is that the second (classification) network is independent of the segmentation network. To use the system on another device, only the segmentation stage needs to be retrained so that the system knows how each tissue type appears in that device's scans. This has been tested on other OCT devices, including the Spectralis from Heidelberg Engineering, the second most used device type at Moorfields Eye Hospital; it produced an erroneous referral suggestion in only 4 of the 116 cases tested, similar to the 5.5% error rate on the first device. As a first step towards a more widespread audience, if and when the technology is launched, it will be applied in all Moorfields hospitals in the UK for the first five years, free of charge! And because it can be applied to a wider range of devices, it will be much cheaper for other organisations to adopt it on their existing equipment rather than having to purchase additional specialist hardware. In all, this could be a giant leap for many countries around the world, hugely increasing the number of people who could benefit from this innovative technology rather than limiting it to one device and therefore limiting who can use it. Overall, according to DeepMind, the initial 14 months of research and development have been very successful, but the technology is by no means ready to be launched into the marketplace.
However, with more rigorous testing and further technological development, it is likely that before long we will see this technology implemented nationally and possibly even internationally. We hope that, with this framework as a basis, others will be able to build on this work in the future, branching out into new areas of this sector now that the 'black box' problem has been identified and, to some extent, overcome. Additionally, thinking outside the box, many have suggested that this technology could be useful for training purposes: people could use it to learn how to interpret images of the eye and where to look for features that give cause for concern. It is said that this could be a revolutionary step forward in this field and that, much as 5G will allow people in rural areas around the world to gain access to healthcare specialists, this AI technology will provide the fastest and most accurate diagnoses no matter where in the world someone is situated.
April 22, 2019
AI & Data
Driverless cars...a reality or a distant dream?
More and more we are seeing companies developing what they call driverless cars, which in reality are nothing more than cars with certain aspects of autonomous technology. To be a true 'driverless car', a vehicle must be able to navigate to a set location, spot obstacles and park, all without any human intervention. We have not yet reached this point and, according to experts, are still a few years away. As of now, driverless car technology is very temperamental, and errors are still far too common for it to be relied on safely. The eventual aim is artificial general intelligence, where the technology can replicate the human driver without replicating human mistakes. Improving this situation matters more now than ever, with ever-increasing numbers of companies, automobile makers and others, launching products with autonomous driving technology. Google wants to implement a service where the public can hail autonomous cars on the street to take them wherever they desire, and we already regularly see Apple's autonomous vehicles on the road. Most staggering of all, Uber, one of the UK's biggest taxi services, is in talks with Waymo about adding some of its 62,000-strong autonomous minivan fleet to its taxi cars. One major problem is that, while in theory machine learning can teach the technology the rules of the road and how to drive legally, the technology does not have the judgement or the responses of actual human beings. As a result, it doesn't always account for what other drivers are doing or for the ever-changing environment around it, and AI decision-making is becoming an obstacle to gaining the public's trust.
Very common occurrences, from safely allowing another vehicle to pass (which may involve a lane change) to adapting to changing weather, are proving to be challenges for companies in this field. The weather is a particular worry, as even human beings find driving in harsh conditions difficult at the best of times. We have already seen the sensors on Google's AI cars affected by rain, which can impair the LIDAR's ability to read the road, showing just how vulnerable even the most advanced technology can be. Knowing all of this, Microsoft and researchers at MIT have combined forces on a project to fix the problem of these 'virtual blind spots', which lead to errors like those mentioned above. The project works by taking a scenario and comparing what a human would do with what the car's own AI would do; where the human option is safer and more desirable, the vehicle's data is updated so that it acts the same way in similar future situations. In effect, as stated by Ramya Ramakrishnan, an author of the report, "The idea is to use humans to bridge the gap between simulation and the real world". The researchers state that, over time, a 'list' is compiled of different situations with the correct and incorrect reactions to each, in the hope that through machine learning this AI technology can improve greatly. Although this approach has so far only been tested in video games and is still far from being implemented in vehicles on public roads, it shows a lot of promise and is a step in the right direction towards fully autonomous vehicles. Looking to the future, it is now more realistic that one day we will be able to sit in a self-driving car, without fear of accidents or damage to the vehicle, and arrive safely at whatever our destination may be. This is a future we can all look forward to!
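The blind-spot correction idea described above can be sketched in a few lines. The situations, actions and deliberately naive 'policy' below are invented for illustration; this is the general compare-and-record pattern, not the MIT/Microsoft system itself.

```python
# An illustrative sketch of blind-spot correction: compare the AI
# policy's action with a human demonstrator's in each situation, and
# record disagreements as "blind spots" whose corrections are reused.
def ai_policy(situation):
    # A deliberately naive stand-in policy: always keep driving.
    return "continue"

human_demonstrations = {
    "clear road": "continue",
    "car merging from right": "slow and yield",
    "heavy rain, low visibility": "slow down",
}

blind_spots = {}
for situation, human_action in human_demonstrations.items():
    if ai_policy(situation) != human_action:
        blind_spots[situation] = human_action    # remember the safer choice

def corrected_policy(situation):
    """Prefer a recorded human correction when one exists."""
    return blind_spots.get(situation, ai_policy(situation))

print(corrected_policy("car merging from right"))  # -> slow and yield
print(corrected_policy("clear road"))              # -> continue
```

In the real research the recorded disagreements feed a learned model rather than a lookup table, but the principle is the same: human behaviour supplies the corrections the simulator never taught.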
April 3, 2019