Pakistani-American Starts Defense-Focused AI Company
Amir Husain, a Pakistani-American AI expert, has started a defense-focused artificial intelligence company, according to builtinaustin.com. Amir Husain's AI company SparkCognition has been building artificial intelligence and machine learning (AI/ML) software for its various clients since 2013. After closing a $100 million Series C round last year, the company claims to be “one of the most valuable startups in Texas and one of the most valuable AI startups in the United States.” Earlier this year, Amir Husain and his wife Zaib, both University of Texas at Austin alumni, donated $5 million to launch an "Artificial Intelligence Institute" at their alma mater.
Pakistani-American Couple Amir and Zaib Husain
“We started to develop software capabilities for a variety of (Department of Defense) clients and partners in the defensive industry,” founder and CEO Amir Husain said in a virtual event announcing the new company, as reported by the Austin American-Statesman. “We invented AI-powered weapon systems, prototyped a few and secured patents for many more. We have learned rich lessons and identified the shortcomings that prevent us now, as a country, from taking the lead in this critical new area.”
The new defense-focused AI company has announced several members of its board of directors who have served in high-ranking positions in the United States government. Most prominent among them are retired US Marine Corps General John Allen, former Air Force Under Secretary Lisa Disbrow, retired Navy Admiral John M. Richardson and former Deputy Secretary of Defense Robert O. Work.
Artificial intelligence is now being seen as the future of modern warfare. AI-powered networked drone swarms have recently been deployed successfully in the Azerbaijan-Armenia conflict and in the Gulf region.
Defense analysts believe that Turkish and Israeli drones helped Azerbaijan achieve a decisive victory against Armenia. "Azerbaijan’s drones owned the battlefield in Nagorno-Karabakh — and showed future of warfare," says the Washington Post headline as tweeted by drone warfare expert Franz-Stefan Gady. In 2019, dozens of cheap drones were deployed against the Abqaiq and Khurais oil fields, cutting Saudi Aramco's production by half, according to multiple media reports. Saudi and US officials have blamed Iran for the destructive hit. This was the first time in the history of warfare that cheap drone swarms loaded with explosives dodged sophisticated air defense systems to hit critical infrastructure targets.
Amir Husain was born in Lahore, Pakistan in 1977. Husain enrolled in the Punjab Institute of Computer Science at the age of 15 and graduated two years later with a bachelor's degree in computer science. Amir is a serial entrepreneur who has started and sold several companies. He started SparkCognition in 2013 with Michael Dell as its first investor.
South Asia Investor Review
NED Alum Raises $100 Million For FinTech Startup in Silicon Valley
Pakistani-Americans Among Top 5 Most Upwardly Mobile Ethnic Groups
NED Alum Raghib Husain Sells Silicon Valley Company for $7.5 Billion
Pakistan's Tech Exports Surge Past $1 Billion in FY 2018
NED Alum Naveed Sherwani Raises $50 Million For SiFive Silicon Valley Startup
OPEN Silicon Valley Forum 2017: Pakistani Entrepreneurs Conference
Pakistani-American's Tech Unicorn Files For IPO at $1.6 Billion Valuation
Pakistani-American Cofounders Sell Startup to Cisco for $610 million
Pakistani Brothers Spawned $20 Billion Security Software Industry
Pakistani-American Ashar Aziz's Fireeye Goes Public
Pakistani-American Pioneered 3D Technology in Orthodontics
Pakistani-Americans Enabling 2nd Machine Revolution
Pakistani-American Shahid Khan Richest South Asian in America
Two Pakistani-American Silicon Valley Techs Among Top 5 VC Deals
Pakistani-American's Game-Changing Vision
Get people familiar with current technologies and start building out our own national cloud which can support AI/ML services and development.
The scope of CENTAIC’s activities is not yet known to the public, but it may be researching a number of key fields in AI, such as big data, machine learning, deep learning, predictive analytics and, potentially, natural language processing (NLP). Not only does each of these sub-fields apply to current and emerging air warfare applications – notably drone development – but the PAF will need to draw on these areas to drive its next-generation fighter aircraft (NGFA) aspirations. The creation of CENTAIC was inevitable.
The PAF had noted that CENTAIC could support the development of civilian applications. However, as the entity's work in aid of the PAF's programs matures, CENTAIC may ultimately specialize in the domains directly relevant to the PAF. But the underlying infrastructure and knowledge/capacity creation could scale out into sister organizations – ideally in both the public and private sectors – that drive AI development for healthcare, manufacturing, resource management, and other areas.
However, specifically in terms of the PAF, the creation of CENTAIC could help it indigenize many key – but intangible and less appreciated – inputs for its NGFA under Project Azm.
These inputs can range from algorithms for guidance systems for air-to-air and air-to-surface missiles to image processing for TV/IR seekers, sensor fusion, human-machine interfaces (HMI), and other applications. While not as physically tangible as gas turbines or transmit-and-receive modules (TRM) for an active electronically scanned array (AESA) radar, AI research could eliminate barriers to outcomes that are otherwise simply unavailable to Pakistan.
Sensor fusion is one such example. There is no off-the-shelf solution (independent of physical subsystems), so the PAF would either have to acquire it as-is from a willing supplier (which may carry limitations, such as requiring it to buy specific hardware) or develop it internally.
Likewise, Pakistan could also develop a myriad of intellectual property (IP) in AI, which may help it enter research and development (R&D) partnerships as an active contributor instead of a passive bystander. For a country that lacks industrial inputs, AI could emerge as an ‘equalizer’ for R&D growth.
AI is Essential for Future Air Combat Applications
In this article, we will refer to AI as an overarching field that includes the following sub-areas:
Big Data: This is the process of identifying trends, correlations, and other insights from disparate sources of information. The intended outcome of big data is to improve decision-making.
The PAF can draw data from a variety of sources, such as – among others – instrumentation equipment installed at the Sonmiani Weapon Testing Range (WTR), the Damage Tolerance Analysis/Structural Health Management (DTA/SHM) tools used at Pakistan Aeronautical Complex (PAC), Air Combat Maneuvering Instrumentation (ACMI) pods, specific sensors within its aircraft, flight sorties, exercises, and other areas. Additionally, a little-known feature of the JF-17 is its heavily instrumented nature; as a result, the PAF has thousands of hours of flight data from the JF-17. The establishment of CENTAIC may be driven by the desire to utilize this goldmine of flight data.
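As a toy illustration of the kind of insight big-data analysis extracts from such sources, the sketch below computes the correlation between two invented flight-data series; the variable names and numbers are purely hypothetical, not drawn from any real dataset:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical instrumentation readings: engine temperature vs. vibration.
engine_temp = [610, 625, 640, 655, 670, 690]
vibration = [2.1, 2.3, 2.6, 2.8, 3.1, 3.5]

r = pearson(engine_temp, vibration)
print(f"correlation: {r:.2f}")  # a value near 1.0 indicates a strong linear trend
```

A real big-data pipeline would run analyses like this across millions of records from disparate sources, but the decision-making payoff is the same: quantified trends instead of intuition.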
Machine Learning (ML): ML is a form of AI that ‘learns’ from past data and results so as to make better decisions in the future. ML uses algorithms to understand the data (which may come from the sources listed above) and, in turn, help the planner make an informed decision. While it may sound like common sense, humans seldom have enough time to manually read and understand all of the facts of a situation. So, inevitably, humans could end up compromising or “going with their gut” in the absence of information. In contrast, ML can do the heavy-lifting analysis across gigabytes or terabytes of data quickly, leaving the human decision-maker with fewer blind spots and uncertainties. Examples of ML applications include electronic warfare, online emergency flight-plan generation, flight-model learning from flight data, adaptive flight control systems, and online optimal guidance for munitions.
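A deliberately tiny sketch of "learning from past data" is a one-nearest-neighbour classifier, which labels a new situation by finding the most similar past case. All the cases and labels below are invented for illustration only:

```python
def nearest_neighbor(history, query):
    """Return the label of the past case closest to the query vector."""
    def dist(a, b):
        # squared Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(history, key=lambda case: dist(case[0], query))[1]

# Hypothetical past cases: (signal features) -> classification label.
history = [
    ((9.5, 0.2), "friendly"),
    ((2.0, 8.0), "hostile"),
    ((8.8, 1.1), "friendly"),
    ((1.5, 9.2), "hostile"),
]

# A new observation is labeled by its closest precedent.
print(nearest_neighbor(history, (2.2, 7.5)))  # prints "hostile"
```

Production ML systems use far more sophisticated models, but the principle is the same: past data, not hand-written rules, drives the decision.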
Deep Learning: This is a sub-field of ML. It involves running programs through successive layers of data to pull conclusions or insights from several sources. Basically, where ML would use algorithms to interpret the data and help a human decide, deep learning arranges algorithms into ‘artificial neural networks’ that can make decisions autonomously. In other words, deep learning can enable drones that work on their own with limited human input or control. It may even help with the development of new-generation terminal-stage seekers meant to attack moving and/or hidden targets. Deep learning will enable the operation of drone swarms and may lead to decision-making AIs in loyal-wingman drones.
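A full neural network is beyond a blog snippet, but its basic building block, a single artificial neuron trained by gradient descent, can be sketched in a few lines. This toy example learns the logical OR function; the data and learning rate are illustrative choices:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy training set: learn the logical OR of two binary inputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1 = w2 = b = 0.0
lr = 1.0
for _ in range(5000):  # simple stochastic gradient descent
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        err = out - target          # gradient of the cross-entropy loss
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b -= lr * err

# After training, the neuron reproduces OR on all four inputs.
preds = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(preds)  # [0, 1, 1, 1]
```

Deep networks stack thousands of such units into layers, which is what lets them extract insight from raw sensor data rather than hand-engineered features.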
Predictive Analytics: This is basically the use of data to determine what is most likely to occur next. One good example is aircraft maintenance. The PAF can use its existing inventory/requisition logs as well as the sensors on board its aircraft to set up predictive and preventative maintenance schedules. In turn, it can reduce long-term repair costs, cut downtime, and increase availability rates, especially for the JF-17.
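A minimal form of predictive maintenance is fitting a trend to a sensor reading and estimating when it will cross a service threshold. The sketch below does this with an ordinary least-squares line; the vibration figures and the threshold are invented for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical vibration readings (mm/s) logged every 100 flight hours.
hours = [0, 100, 200, 300, 400]
vibration = [1.0, 1.2, 1.4, 1.6, 1.8]

slope, intercept = fit_line(hours, vibration)
threshold = 3.0  # hypothetical service limit
predicted_hours = (threshold - intercept) / slope
print(f"schedule maintenance near {predicted_hours:.0f} flight hours")
```

Real systems combine many sensor channels and non-linear models, but the payoff is the same: maintenance scheduled by predicted wear instead of fixed intervals.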
Natural Language Processing (NLP): NLP enables computers to communicate with people. This might not seem important in an aviation context today, but it will be in the future. For example, a single NGFA could generate vast amounts of data from a dozen major sensors. If, for example, an enemy aircraft is trying to ‘paint’ the NGFA by radar, the NGFA could alert the pilot with a specific light or screen. This may be doable if that was the only information the pilot needed, but the pilot may also want to know about the current status of his/her ‘loyal wingman’ drones, or when to send a loitering munition into a terminal stage. The pilot has limited time and energy to read a giant screen of alerts. With NLP, the NGFA could converse with the pilot, thereby cutting the risk of information overload.
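Conversational NLP of the kind described is far beyond a snippet, but the idea of collapsing many raw alerts into one plain-language advisory can be caricatured with a rule-based generator. The alert codes and wording below are entirely made up:

```python
# Hypothetical mapping from raw avionics alert codes to plain-language phrases.
PHRASES = {
    "RWR_LOCK": "an enemy radar is attempting to lock on",
    "WINGMAN_LOW_FUEL": "wingman drone 2 is low on fuel",
    "MUNITION_TERMINAL": "loitering munition has entered its terminal stage",
}

def summarize(alerts):
    """Collapse a list of alert codes into one natural-language sentence."""
    phrases = [PHRASES.get(a, f"unrecognized alert {a}") for a in alerts]
    if not phrases:
        return "All systems nominal."
    return "Pilot advisory: " + "; ".join(phrases) + "."

print(summarize(["RWR_LOCK", "WINGMAN_LOW_FUEL"]))
```

An actual NLP system would generate and understand free-form language rather than fill templates, but even this caricature shows the goal: one digestible sentence instead of a screen full of alerts.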
Potential Marquee Air Warfare Applications
Though the projects at CENTAIC may be at their early stages, one can see the PAF directing its AI/ML efforts toward the following areas. These projects can tie into its NGFA efforts both directly and in complementary ways.
Unmanned Combat Aerial Vehicles (UCAV)
Be it ‘loyal wingman’ drones or deep-strike aircraft, future UCAV employment will require varying degrees of autonomous operation. In fact, Pakistan may already have a basis to start developing a ‘loyal wingman’ UAV by leveraging its existing cruise missile and target drone technologies [end of excerpt, subscribe to Quwa Premium to read the full article].
by Ejaz Haider
Azerbaijan could either just swallow Armenian intransigence or wait for the right opportunity. It gambled on the latter.
But, and that’s important: war is serious business and cannot be undertaken lightly.
At the politico-strategic level, the growing differential between the Azeri and Armenian economies unfolded in Baku’s favour. The bigger economy (oil revenues, tourism, higher exports etc) allowed Baku to spend more on defence. However, except for 2015, when Azerbaijan’s defence spending rose to 5.6% of its GDP, it averaged just below 4% between 2009 and 2019. Armenia, while spending relatively more on defence as a percentage of its GDP, averaging 4.5%, could not catch up given the much smaller size of its economy. According to data from the Stockholm International Peace Research Institute, Baku spent some USD 24 billion on defence between 2009 and 2018; Armenia spent a little over USD 4 billion over the same period.
Nonetheless, the economy is just one factor, though a very important one. A state intending to go to war must also have its diplomatic flanks covered. Armenia has always been a close ally of Russia, while Russia’s relations with Azerbaijan have seen ebbs and flows. Since 2018, however, Armenia-Russia relations have been strained despite a military pact (Russia also maintains a base in Armenia), while Moscow’s relations with Baku have improved.
Azerbaijan also has very close relations with Turkey for historical, ethnic and linguistic reasons. Armenia and Turkey have historically been inimical. Azerbaijan and Turkey might be two separate states, but they consider themselves the same people. Azerbaijan also has strong ties with Israel. Turkey is also the second most important state player in the Caucasus and under President Recep Tayyip Erdoğan has developed a complicated relationship with Russia, which considers the former Soviet republics as Moscow’s sphere of influence.
Azerbaijan has been buying military equipment from Turkey, Israel and Russia. Its military has very close ties with the Turkish military; both sides have been conducting joint exercises and Turkey has been training Azeri officers and ranks. Azerbaijan’s military training, deployment, employment of equipment and doctrinal development owes greatly to the Turkish military.
Lesson 2: if a state wants to go to war, it must have strong backers.
Azerbaijan also has a strong legal case in its conflict with Armenia and the separatist Armenian government (not recognised even by Armenia for that reason). The UN resolutions completely support Azerbaijan’s claims on N-K.
Lesson 3: it’s always good to have a strong legal case if a state wants to use force.
This is as far as the politico-strategic environment is concerned and Azerbaijan managed to create its asymmetric advantage over Armenia at that level.
But war, in the end, is a contest where the will of the fighting sides is tested. That’s where we come to the military-operational level. The lessons at this level are quite fascinating.
From the actual conduct of war it is clear that Armenia was fighting the previous war (when it had an edge) while Azerbaijan had planned its offensives for the present war. It showed superior planning (the opening phase targeted the relatively flatter southern districts abutting N-K) and execution. Here are some lessons.
How does drone warfare impact India’s preparedness?
With neighbours such as Pakistan and China, India faces threats at any given point in time. Bolstering its military with the latest technology is the need of the hour, and India has already been making moves in the combat drone/UCAV spectrum. The Indian Army possesses around 90 Heron surveillance drones and the Harop loitering munition, and is planning to acquire more of these from Israel.
In August this year, India's defence establishment approved the upgrade of the Heron UAVs. The upgrade will include arming some of these drones, sources in the Indian security establishment said. The decision comes amid the India-China standoff, as the Indian military prepares to enhance its surveillance capabilities at the Line of Actual Control (LAC). The Heron UAVs are already being used in the forward areas of Ladakh.
India is also looking to expedite its testing of the indigenous surveillance drones ‘Rustom-2’ before inducting them into service.
During the defence expo in Lucknow in February this year, Hindustan Aeronautics Limited (HAL), Israel Aerospace Industries (IAI) of Israel and Dynamatic Technologies Limited signed an agreement for the manufacturing of drones.
The Indian Army also opted for the SpyLite mini-UAV for high-altitude aerial surveillance. This is built by Cyient Solutions & Systems (CSS), a joint venture between Cyient Ltd (India) and BlueBird Aero Systems (Israel).
With the opening of the American drone market, India is also exploring the possibility of acquiring several GA-ASI MQ-9 Reapers from the US subject to approval.
Talking about threats from neighbours, Pakistan has a plethora of options to choose from if it decides to expand its already existing combat drone options. Both Turkey and China design and manufacture high-end drone equipment. On the other hand, India will hope to bank upon Israel and the US.
With regard to the use of combat drones in our part of the world (read: India's borders with Pakistan and China), drone warfare may not be as successful as it was in the Armenian context. This is because both India and Pakistan field heavy air defence systems.
Unless India completely dominates the air, drones may not be as successful in combat operations. The induction of the Rafale may help India in this regard.
China is the bigger player when it comes to drones and unmanned aerial vehicles (UAVs). It has invested a lot of effort in developing civilian drones, and that experience has translated into developing combat drones. China is one of the leading countries in R&D concerning drone technology.
China also possesses anti-drone technology that jams drones' signals or shoots down incoming drones in order to divert or destroy them.
With China’s growing dominance in the global drone market and Pakistan’s proximity to Beijing, India needs to adapt quickly to the changing game of drone warfare, as it is likely to become even more prevalent in the coming years.
UCAVs also have less carrying capacity than fighter jets. Hence, they are used in small but precise attacks rather than the air raids that jets usually conduct. Azerbaijan used a new method of precision warfare that best complements such drones. This was previously possible only for rich and well-established militaries, but technology has now made it accessible to countries like Azerbaijan.
Countries with outstanding border conflicts include, to name a few, India, Pakistan, Serbia and Ukraine. All of these nations have already started purchasing attack drones and UCAVs.
The combat drone market could expand further with the Trump administration's push to deregulate armed drone sales, in a bid to allow US manufacturers to compete in an export market dominated by China, Israel and Turkey.
The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk: South Asian Perspectives, edited by Petr Topychkanov
The ongoing renaissance of artificial intelligence (AI) is reshaping the world. Just like many other developing countries, India and Pakistan—the two nuclear-armed states of South Asia—are exploring the subsequent opportunities for economic and social change. Their political leaders seem to prioritize civilian applications of AI over the military, and public attention reflects the political priorities. National efforts to militarize AI do not receive the same public coverage as civilian AI efforts.
Meanwhile, according to the available open-source information, India and Pakistan are increasingly interested in the potential benefits of AI for defence and security. This might be one of the reasons why an expert debate on the opportunities and risks posed by the AI renaissance in the military realm has started in recent years. However, the debate suffers from large gaps, particularly in the emerging discussion on the potential impact of AI on strategic stability and nuclear risk in South Asia. This issue has been underexplored by scholars studying South Asia from both inside and outside the region.
This edited volume—which follows earlier volumes on Euro-Atlantic and East Asian perspectives—tries to fill the gaps in the scholarly debate on this important topic and to facilitate further regional debate. It is based on a workshop held in Colombo in February 2019. The eight expert contributors—from South Asia and around the world—reflect the variety of issues, approaches and views.
It is clear from a comparative study of the state of adoption of AI in South Asia that India and Pakistan are playing catch-up in the world competition on military AI. Compared to the United States, China and Russia, India’s advances are modest, while Pakistan’s are even less visible. One of the reasons seems to be under-resourcing and inefficiencies in defence research and state industries. These prohibit the development and adoption of emerging technologies within a reasonable time frame.
However, according to contributors from India and Pakistan, both countries are well aware of the strategic significance of AI. They see AI as one of many enablers of the mutual strategic balance. India must also take into consideration the role of AI in the military build-up of China, one of its long-term security concerns.
In assessing the strategic significance of AI, the expert contributors—regardless of their origin—agree that AI is a double-edged sword. On the one hand, AI could enhance nuclear command and control, early warning, intelligence surveillance and reconnaissance (ISR), and the physical security of nuclear capabilities, among other areas. In this way it would improve states’ sense of security. On the other hand, the same advances could cast doubt on the survivability of their respective second-strike capabilities. This doubt would stimulate more aggressive nuclear postures that could increase nuclear risk.
The prospect of machines executing warfare tasks through the interaction of their embedded sensors, computer programming, and algorithms, devoid of any human involvement, is indeed becoming a concrete reality. Autonomous weapons are not a new technology, yet the magnitude of the autonomy bestowed upon them is intensifying with every passing day. These autonomous weapons are quite captivating and are being pursued by different states.
Technological advancement has been the hallmark of major powers. A new arms race has been ignited in the militarization of AI, with states endeavoring to gain a comparative advantage over one another. Militaries are eyeing these new technical developments, which will not only augment their capabilities but also make their networks more secure. The USA was the first country to incorporate AI into its military, followed by China and Russia, which have galvanized their efforts to avoid lagging in the field. President Vladimir Putin has highlighted the importance of AI by stating, “Artificial intelligence is the future, not only for Russia, but for all humankind.” He added, “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”
Currently, the USA and China are the front runners in the AI race. In 2014, the US Department of Defense released the “Third Offset Strategy,” which pressed for deeper integration of AI into the US military. The military is also pursuing a program named Joint Enterprise Defense Infrastructure (JEDI), a cloud computing setup that serves US forces around the world by pooling data and distributing workloads, along with critical information, to US soldiers. Apart from logistics, AI is also being incorporated into the combat sector. The USA has fielded a number of autonomous systems, prominent among which are the autonomous ship Sea Hunter, which can operate without a single crew member, autonomous aerial vehicles such as the Loyal Wingman, and swarms of drones, which are also fruits of AI.
Similarly, China is investing a towering amount of money in AI and has indicated its desire to become the AI leader by 2030. The reason lies in the notion that AI can be instrumental in overcoming the numerical superiority the US enjoys at the moment, which can only be outdone by the sophistication that AI brings. Hence, AI has become a strategic priority for China, and the Chinese military is being modernized with this technology at a fast pace. China has made substantial progress in AI-powered aerial vehicles and maritime drones. Similarly, the development of autonomous submarines and the incorporation of AI into cruise missiles are being carried out swiftly.
Although the USA currently has an edge over every other state, China is catching up at an extremely fast pace. One reason for such expedited progress is that much AI development originates in the private sector. In China, the private sector must abide by the orders of the government, in contrast to the USA, where private companies are reluctant to weaponize their work and are not bound to fulfil the government's demands. In 2018, Google employees staged protests over Google's involvement in Project Maven, an initiative to use AI to bolster the targeting capabilities of drones. This opposition pressured Google to dissociate itself from the project.
Pakistani agritech firm Industrial Vision Systems is intent on selling its Artificial Intelligence (AI) technology for scanning and grading fruits and vegetables to farmers in Jordan, UAE and Saudi Arabia.
The start-up has developed a portable machine that uses AI to grade fruits, vegetables and dates. First, it scans them through a camera connected to a computer, and then grades them by identifying their defects and creating a complete dataset based on their quality.
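The article does not describe the grading algorithm, but the general approach, scoring produce by the fraction of defective surface detected in an image, can be sketched as follows. The grade bands and the binary defect mask are hypothetical stand-ins for what a real computer-vision model would produce:

```python
def grade_from_mask(defect_mask):
    """Grade a piece of produce from a binary defect mask.

    defect_mask: 2D list where 1 marks a pixel flagged as defective
    (in a real system this would come from a computer-vision model).
    """
    total = sum(len(row) for row in defect_mask)
    defective = sum(sum(row) for row in defect_mask)
    ratio = defective / total
    if ratio < 0.02:
        return "A"   # export quality (hypothetical band)
    if ratio < 0.10:
        return "B"   # domestic market (hypothetical band)
    return "C"       # processing only

# Toy 4x4 mask with one defective pixel: ratio 1/16, so grade "B".
mask = [[0, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(grade_from_mask(mask))  # prints "B"
```

The value of automating this step is consistency: the same defect ratio always maps to the same grade, with no human fatigue or labor cost.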
The tech should benefit both farmers and the agriculture industry in Pakistan by enhancing export earnings and modernizing agricultural practices in the country. Being developed in Pakistan, this AI machine is also a cost-effective solution for Pakistani farmers' many needs.
CEO and Co-founder of Industrial Vision Systems, Saad Tanveer Ahmed, says that the firm is engaging with farmers in several Middle Eastern countries in hopes of selling the technology there. The AI machine will help date farmers in the Middle East grade their dates without human shortcomings and high labor costs.
British TV regulatory authority Ofcom has imposed a £20,000 fine on Republic Bharat, Republic TV’s Hindi channel, for hate speech against Pakistani people in a programme broadcast last year.
Ofcom, which stands for Office of Communications, is the government-approved regulatory and competition authority for the broadcasting, telecommunications and postal industries of the United Kingdom.
In a detailed note on its decision, Ofcom said that Republic Bharat’s Poochta Hai Bharat programme – the evening primetime show hosted by Arnab Goswami – had failed to comply with its broadcasting rules.
According to Ofcom, an episode, shown on September 6, 2019, featured “comments made by the host and some of his guests that amounted to hate speech against Pakistani people, and derogatory and abusive treatment of Pakistani people. The content was also potentially offensive and was not sufficiently justified by the context.”
At the time, the atmosphere was charged with Pakistan's critical reaction to India taking away Jammu and Kashmir's special status and breaking up the state into two centrally-ruled union territories. But the regulator did not accept this as an extenuating circumstance.
By the time the episode aired, Ofcom had already notified Republic that it had been receiving a number of complaints on content broadcast by it in relation to “highly pejorative references to members of the Pakistani community (e.g. continually referring to them as “filthy”)”.
Worldview Media Network Limited, the licensee which airs Republic Bharat in the UK, will also need to broadcast a statement of Ofcom’s findings and is barred from repeating the programme in the UK.
The show under the scanner was a 35-minute discussion that hinged upon India’s Chandrayaan mission but sought to encompass a larger narrative on how India was advanced in space science and its neighbour Pakistan, was not.
Among participants were Major Gaurav Arya, Maj General K.K. Sinha, Prem Shukla of the BJP, and Omar Inam and Omar Altaf from Pakistan. A third Pakistani guest remained unidentified by Ofcom, and according to the transcription, was largely unable to get a word in.
“The host and the Indian guests dominated the discussion, with the Pakistani guests attempting to respond but largely being shouted down by the presenter and Indian guests,” Ofcom’s note says.
From the discussion which was often chaotic enough to flummox the transcriber, Ofcom gleaned that “statements were made which implied not just that there are threats to Indian interests and citizens from particular people and groups inside Pakistan, but that all Pakistanis represent a terrorist threat to Indians and others.”
The statements made in the show by guests, including the host Goswami, said Ofcom,
“conveyed the view that all Pakistani people are terrorists, including that: “their scientists, doctors, their leaders, politicians all are terrorists. Even their sports people”; “every child is a terrorist over there. Every child is a terrorist. You are dealing with a terrorist entity”. One guest also described Pakistani scientists as “thieves”, while another described Pakistani people as “beggars”.”
America's two main adversaries are just as keenly aware of how AI supremacy could lead to battlefield supremacy and are making just as much investment into AI as the new NSCAI report recommends America does. In 2017, the Chinese government issued a statement that technological advances, including in AI, would make China the global leader by 2030. “By 2030, our country will reach a world-leading level in artificial intelligence theory, technology and application and become a principal world center for artificial intelligence innovation,” the CCP claimed. That same year, Russian President Vladimir Putin made similar comments, claiming that the path to global supremacy is paved with AI. “Artificial intelligence is the future, not only for Russia, but for all humankind,” Putin said. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” Both Russia and China are developing their own unmanned combat aerial vehicles, and both have been accused of leveraging AI-powered cyberattacks or misinformation campaigns against the United States.
Seventy-five percent of the work on Pakistan's first high-standard artificial intelligence laboratory under CPEC at the National University of Sciences and Technology (NUST) has been completed, while the equipment installation is almost 100% finished, Gwadar Pro reported on Saturday.
At the beginning of this year, the Qingluan Artificial Intelligence Laboratory, a laboratory under CPEC, was officially established at NUST through the joint efforts of NUST and the Guangzhou Institute of the Chinese Academy of Sciences.
“Research, development and customization are currently underway. I would say the work is about 75% finished,” Muhammad Khubaib Shabbir, Deputy Director of the China Study Center at NUST, told Gwadar Pro.
The lab has been put into full use, and both students and teaching staff are keen on researching pattern and facial recognition algorithms, the reporter learned.
“Currently, the Cognizer-V1 intelligent video analysis project, a pilot project with the Government of Pakistan, and a commercial project, namely GymBot, are the main projects under development,” Muhammad Khubaib Shabbir revealed.
“Ideally, Cognizer-V1 is one of the most sophisticated pieces of surveillance equipment, which has the capability of converting ordinary cameras and surveillance equipment into smart equipment, using AI and computer vision algorithms,” Muhammad Khubaib Shabbir said.
“To put it simply, the Cognizer-V1 has the ability to detect people lurking in certain areas and generate warnings regarding dangerous behavioral patterns, such as suicide, or other suspicious activities,” Muhammad Khubaib Shabbir said.
In the case of Pakistan, the country is blessed with a large number of artificial intelligence application scenarios and a huge market, thanks to the world's 6th largest population. Moreover, the country is never short on talent.
However, challenges lie in the commercialization of scientific achievements, an important step that can be viewed as one of the sources of innovation.
Due to backward industrial conditions and the obstruction of international exchanges during the epidemic, the progress of commercialization in Pakistani scientific research institutes has been extremely slow.
“Our other key project, ‘GymBot’, can be a perfect example of science commercialization. It is designed as a deep learning device that uses AI and computer vision algorithms and serves as an auxiliary tool in various gym scenarios, monitoring whether clients’ postures are correct.
Experts in various fields are joining the research team to finalize the product. The core functions have already been developed; the team is now building additional modules to integrate and researching new areas to better customize the device,” Muhammad Khubaib Shabbir shared.
“It is important to keep in mind that the Guangzhou Institute of the Chinese Academy of Sciences has shared the source code for ‘GymBot’. This gave researchers from Pakistan first-hand experience of the latest results in AI development and offered them a chance to learn from it, enhance it and make it more usable for the local community. This will most definitely open new doors of opportunity for Pakistanis.”
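A posture check of the kind GymBot is described as performing can be sketched from pose-estimator output: given three (x, y) joint coordinates, compute the elbow angle and flag a bicep curl that never closes enough. All function names and thresholds here are illustrative assumptions, not GymBot's actual code.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by the points a-b-c."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0])
        - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang

def curl_ok(shoulder, elbow, wrist, max_angle=60.0):
    """A curl rep counts as correct if the elbow closes below max_angle."""
    return joint_angle(shoulder, elbow, wrist) <= max_angle
```

In practice the (x, y) joints would come from a pose-estimation model running on the gym camera feed; the sketch only covers the geometric check layered on top of it.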
Moving ideas from the lab to the marketplace is a complicated journey. Researchers and stakeholders must manage the time-consuming transition from academic to commercial contexts and balance their differing goals.
CPEC enables the exchange of advanced concepts at both the technical and management levels. The Qingluan lab can be one of the successful examples.
Meanwhile, an AI chatbot seems poised to displace a great many search engine queries.
ChatGPT is the latest and most impressive artificially intelligent chatbot yet. It was released two weeks ago, and in just five days hit a million users. It’s being used so much that its servers have reached capacity several times.
OpenAI, the company that developed it, is already being discussed as a potential Google slayer. Why look up something on a search engine when ChatGPT can write a whole paragraph explaining the answer? (There’s even a Chrome extension that lets you do both, side by side.)
But what if we never know the secret sauce behind ChatGPT’s capabilities?
The chatbot takes advantage of a number of technical advances published in the open scientific literature in the past couple of decades. But any innovations unique to it are secret. OpenAI could well be trying to build a technical and business moat to keep others out.
What it can (and can’t) do
ChatGPT is very capable. Want a haiku on chatbots? Sure.
How about a joke about chatbots? No problem.
ChatGPT can do many other tricks. It can write computer code to a user’s specifications, draft business letters or rental contracts, compose homework essays and even pass university exams.
Just as important is what ChatGPT can’t do. For instance, it struggles to distinguish between truth and falsehood. It is also often a persuasive liar.
ChatGPT is a bit like autocomplete on your phone. Your phone is trained on a dictionary of words so it completes words. ChatGPT is trained on pretty much all of the web, and can therefore complete whole sentences – or even whole paragraphs.
However, it doesn’t understand what it’s saying, just what words are most likely to come next.
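The "autocomplete" idea above can be demonstrated with a toy bigram model: count which word follows which in a tiny corpus, then always emit the most likely next word. This is a deliberately crude stand-in; ChatGPT's transformer pursues the same next-token objective but with vastly more context and parameters.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
```

Here `predict_next(model, "the")` returns "cat" simply because "cat" follows "the" most often in the corpus, which illustrates the point in the text: the model knows which words are likely to come next, not what the words mean.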
Open only by name
In the past, advances in artificial intelligence (AI) have been accompanied by peer-reviewed literature.
In 2018, for example, when the Google Brain team developed the BERT neural network on which most natural language processing systems are now based (and we suspect ChatGPT is too), the methods were published in peer-reviewed scientific papers, and the code was open-sourced.
And in 2021, DeepMind’s AlphaFold 2, a protein-folding software, was Science’s Breakthrough of the Year. The software and its results were open-sourced so scientists everywhere could use them to advance biology and medicine.
Following the release of ChatGPT, we have only a short blog post describing how it works. There has been no hint of an accompanying scientific publication, or that the code will be open-sourced.
To understand why ChatGPT could be kept secret, you have to understand a little about the company behind it.
OpenAI is perhaps one of the oddest companies to emerge from Silicon Valley. It was set up as a non-profit in 2015 to promote and develop “friendly” AI in a way that “benefits humanity as a whole”. Elon Musk, Peter Thiel, and other leading tech figures pledged US$1 billion towards its goals.
Their thinking was we couldn’t trust for-profit companies to develop increasingly capable AI that aligned with humanity’s prosperity. AI therefore needed to be developed by a non-profit and, as the name suggested, in an open way.
In 2019 OpenAI transitioned into a capped for-profit company (with investors limited to a maximum return of 100 times their investment) and took a US$1 billion investment from Microsoft so it could scale and compete with the tech giants.
It seems money got in the way of OpenAI’s initial plans for openness.
Profiting from users
On top of this, OpenAI appears to be using feedback from users to filter out the fake answers ChatGPT hallucinates.
According to its blog, OpenAI initially used reinforcement learning in ChatGPT to downrank fake and/or problematic answers using a costly hand-constructed training set.
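The downranking idea can be sketched as follows: a reward model scores candidate answers, and low-scoring (likely fake or problematic) ones sink in the ranking. The hand-written scorer below is only a stand-in for illustration; OpenAI's actual system trains a reward model on human preference data and optimizes against it with reinforcement learning (RLHF).

```python
def toy_reward(answer):
    """Toy stand-in for a learned reward model: favor grounded phrasing,
    penalize overconfident phrasing. Heuristics are illustrative only."""
    text = answer.lower()
    score = 0.0
    if "according to" in text or "i'm not sure" in text:
        score += 1.0  # reward hedged / sourced answers
    if "definitely" in text:
        score -= 1.0  # penalize overconfident claims
    return score

def rank_answers(candidates, reward=toy_reward):
    """Order candidate answers best-first by reward score."""
    return sorted(candidates, key=reward, reverse=True)
```

Feeding it two candidate answers shows the mechanism: the hedged, sourced answer outranks the overconfident one, which is the behavior the reward signal is meant to encourage at scale.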
By Cade Metz
Sam Altman sees the pros and cons of totally changing the world as we know it. And if he does make human intelligence useless, he has a plan to fix it.
I first met Sam Altman in the summer of 2019, days after Microsoft agreed to invest $1 billion in his three-year-old start-up, OpenAI. At his suggestion, we had dinner at a small, decidedly modern restaurant not far from his home in San Francisco.
Halfway through the meal, he held up his iPhone so I could see the contract he had spent the last several months negotiating with one of the world’s largest tech companies. It said Microsoft’s billion-dollar investment would help OpenAI build what was called artificial general intelligence, or A.G.I., a machine that could do anything the human brain could do.
Later, as Mr. Altman sipped a sweet wine in lieu of dessert, he compared his company to the Manhattan Project. As if he were chatting about tomorrow’s weather forecast, he said the U.S. effort to build an atomic bomb during the Second World War had been a “project on the scale of OpenAI — the level of ambition we aspire to.”
He believed A.G.I. would bring the world prosperity and wealth like no one had ever seen. He also worried that the technologies his company was building could cause serious harm — spreading disinformation, undercutting the job market. Or even destroying the world as we know it.
Mr. Altman argues that rather than developing and testing the technology entirely behind closed doors before releasing it in full, it is safer to gradually share it so everyone can better understand risks and how to handle them.
He told me that it would be a “very slow takeoff.”
When I asked Mr. Altman if a machine that could do anything the human brain could do would eventually drive the price of human labor to zero, he demurred. He said he could not imagine a world where human intelligence was useless.
If he’s wrong, he thinks he can make it up to humanity.
He rebuilt OpenAI as what he called a capped-profit company. This allowed him to pursue billions of dollars in financing by promising a profit to investors like Microsoft. But these profits are capped, and any additional revenue will be pumped back into the OpenAI nonprofit that was founded back in 2015.
His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures — $100 billion, $1 trillion, $100 trillion.
If A.G.I. does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.
But as he once told me: “I feel like the A.G.I. can help with that.”