Brothers From Rural Pakistan Teaching AI to American High-Schoolers

Haroon and Hamza Choudhry, born in rural Pakistan, are teaching artificial intelligence (AI) to American high-schoolers. The twenty-something Choudhry brothers were 8 and 6 when they came from Pakistan to the United States in 1998. They have co-founded "AI For Anyone", a Brooklyn-based non-profit organization that sends volunteers to teach artificial intelligence to high school students. A recent study shows that Pakistani-Americans are among the top five most upwardly mobile groups in the United States, alongside Indian-Americans and Chinese-Americans from Hong Kong, Taiwan and the People's Republic of China. Pakistani-Americans are known to volunteer for non-profit organizations like AI For Anyone to help the communities they live in, and several are successful social entrepreneurs.

Hamza (right) and Haroon Choudhry in their village in Pakistan

The Choudhrys lived with nine relatives in a two-bedroom apartment in Brooklyn, and later on a poultry farm on the Eastern Shore of Maryland. Their father worked several odd jobs to make ends meet, according to a CNBC report.

In addition to their volunteer work at AI For Anyone, both brothers work in high-tech positions. Here's how CNBC describes their education and careers:

"Haroon won a Gates Millennium scholarship, which gave him a full ride (including tuition, housing, food and transportation) to both Penn State for undergrad and to University of California, Berkeley, where he got his masters in information and data science. After college, Haroon did data science work for Mark Cuban Companies and was a technology consultant at Deloitte Consulting. He is now a data scientist at Komodo Health. Hamza graduated magna cum laude from the University of Maryland. He previously worked at Facebook, and now works in business operations at WeWork."

Knowledge of artificial intelligence (AI) is becoming increasingly important, and Pakistan and Pakistanis cannot afford to be left behind in the world of AI. Koshish Foundation, an organization funded primarily by NED University alumni in Silicon Valley, helped fund the Koshish Foundation Research Lab (KFRL) in Karachi back in 2014. The lab has since received additional funding from numerous national and international organizations, including DAAD, the German Academic Exchange Service, and has been renamed RCAI (Research Center for Artificial Intelligence).

Artificial Intelligence (AI) Applications
In a letter addressed to NEDians Suhail Muhammad and Raghib Husain, the RCAI director Dr. Muhammad Khurram said, "I would really like to thank you (and Koshish Foundation) who helped me in making things happen in the start. Still, a lot needs to be done."

Dr. Ata ur Rahman Khan, former chairman of Pakistan's Higher Education Commission (HEC), believes there is significant potential for Pakistan to grow artificial intelligence technology and products. In a recent op-ed in The News, Dr. Khan wrote:

"Pakistan churns out about 22,000 computer-science graduates each year. With additional high-quality training, a significant portion of these graduates could be transformed into a small army of highly-skilled professionals who could develop a range of AI products and earn billions of dollars in exports."

It's notable that Pakistan's tech exports are growing by double digits and surged past $1 billion in fiscal 2018, according to the State Bank of Pakistan.

Dutch publication innovationorigins.com recently featured a young Pakistani, Tufail Shahzad, from Dajal village in Rajanpur District in southern Punjab. Tufail studied artificial intelligence at universities in China and Belgium. He's currently working in Eindhoven on artificial intelligence (AI) projects as a naval architect and innovation manager at MasterShip Netherlands.

There is at least one Pakistani AI-based startup, Afiniti, founded by serial Pakistani-American entrepreneur Zia Chishti. Afiniti recently raised a Series D round of $130 million at a $1.6 billion valuation, according to Inventiva. The bulk of the Afiniti development team is located in Thokar Niaz Baig, Lahore, and the company also has development team members in Islamabad and Karachi.

Afiniti uses artificial intelligence (AI) algorithms to enable real-time, optimized pairing of individual call center agents with individual customers in large enterprises for best results. When a customer contacts a call center, Afiniti matches his or her phone number with any information related to it from up to 100 databases, according to VentureBeat. These databases carry purchase history, income, credit history, social media profiles and other demographic information. Based on this information, Afiniti routes the call directly to an agent who has been determined, based on their own history, to be most effective in closing deals with customers who have similar characteristics.
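
The pairing idea described above can be sketched in a few lines of code. This is a deliberately simplified illustration, not Afiniti's actual system: the agent names, customer segments and outcome data below are all hypothetical, and a real deployment would use statistical models over far richer features rather than raw close-rate lookups.

```python
# Illustrative sketch (not Afiniti's actual algorithm): route an incoming
# caller to the agent whose historical close rate is highest for
# customers with a similar profile.

# Hypothetical historical outcomes: (agent, customer_segment) -> list of
# deal results (1 = closed, 0 = not closed).
history = {
    ("agent_a", "high_income"): [1, 1, 0, 1],
    ("agent_a", "new_customer"): [0, 0, 1],
    ("agent_b", "high_income"): [0, 1, 0, 0],
    ("agent_b", "new_customer"): [1, 1, 1, 0],
}

def close_rate(agent, segment):
    """Fraction of past calls this agent closed for this segment."""
    outcomes = history.get((agent, segment), [])
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def route_call(segment, available_agents):
    """Pick the available agent with the best record for this segment."""
    return max(available_agents, key=lambda a: close_rate(a, segment))

print(route_call("high_income", ["agent_a", "agent_b"]))   # → agent_a
print(route_call("new_customer", ["agent_a", "agent_b"]))  # → agent_b
```

The design choice worth noting is that routing decisions are driven entirely by measured outcomes rather than fixed rules, which is why such systems improve as more call history accumulates.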

This latest Series D round includes former Verizon CEO Ivan Seidenberg; Fred Ryan, CEO and publisher of the Washington Post; and investors Global Asset Management (GAM), The Resource Group (which Chishti helped found) and Zeke Capital, as well as unnamed Australian investors. Investors in Afiniti's Series C round included GAM; McKinsey and Co; The Resource Group (TRG); G3 Investments (run by Richard Gephardt); Elisabeth Murdoch; Sylvain Héfès; John Browne, former CEO of BP; Ivan Seidenberg; and Larry Babbio, a former president of Verizon. Counting the money previously raised, the company has now raised more than $100 million, according to VentureBeat's sources.

The drone is one example of an artificial intelligence application, and it is now a household word in Pakistan. Drones outrage many Pakistanis when Americans use them to hunt militants and launch missiles in FATA. At the same time, drones are inspiring a young generation of students to study artificial intelligence at Pakistan's 60 engineering colleges and universities, and have given rise to robotics competitions at engineering universities like the National University of Sciences and Technology (NUST) and my alma mater, NED Engineering University. Continuing reports of new civilian uses of drone technology are adding to Pakistanis' growing interest in robotics.

Dr. Ata ur Rahman Khan rightly argues in his op-ed that AI should be an area of focus for research and development in Pakistan. He says that "the advantage of investing in areas such as artificial intelligence is that no major investments are needed in terms of infrastructure or heavy machinery and the results can become visible within a few years". "Artificial intelligence will find applications in almost every sphere of activity, ranging from industrial automation to defense, from surgical robots to stock-market assessment, and from driverless cars to agricultural sensors controlling fertilizers and pesticide inputs", Dr. Khan adds.

Hamza and Haroon Choudhry, co-founders of AI For Anyone, exemplify the recent study's finding that Pakistani-Americans are among the top five most upwardly mobile groups in the United States, and that many of them volunteer and build successful social enterprises in the communities they live in.

Related Links:

Haq's Musings

South Asia Investor Review

NED Alum Raises $100 Million For FinTech Startup in Silicon Valley

Pakistani-Americans Among Top 5 Most Upwardly Mobile Ethnic Groups

NED Alum Raghib Husain Sells Silicon Valley Company for $7.5 Billion

Pakistan's Tech Exports Surge Past $1 Billion in FY 2018

NED Alum Naveed Sherwani Raises $50 Million For SiFive Silicon Valley Startup

OPEN Silicon Valley Forum 2017: Pakistani Entrepreneurs Conference

Pakistani-American's Tech Unicorn Files For IPO at $1.6 Billion Valuation

Pakistani-American Cofounders Sell Startup to Cisco for $610 million

Pakistani Brothers Spawned $20 Billion Security Software Industry

Pakistani-American Ashar Aziz's Fireeye Goes Public

Pakistani-American Pioneered 3D Technology in Orthodontics

Pakistani-Americans Enabling 2nd Machine Revolution

Pakistani-American Shahid Khan Richest South Asian in America

Two Pakistani-American Silicon Valley Techs Among Top 5 VC Deals

Pakistani-American's Game-Changing Vision 

Comments

Riaz Haq said…
AI drives driverless trucks being tested right now on public roads
60 Minutes climbs aboard for a look at the very near future of transportation and technology that could eliminate as many as 300,000 jobs, Sunday.

https://www.cbsnews.com/news/driverless-trucks-being-tested-on-public-roads-60-minutes-2020-03-13/

Few are aware that driverless 18-wheelers are already on the road. The test runs on highways have humans in them just in case sensors or computers fail, but an autonomous trucking executive says by next year, they won't. The future of freight on America's roads can be a driverless one, this executive says. And that's news to many, especially the truck drivers who stand to lose their livelihoods. 60 Minutes cameras ride aboard a test run and Jon Wertheim reports on the potential disruption to a storied American industry on the next edition of 60 Minutes, Sunday, March 15 at 7 p.m. ET/PT on CBS.

"We believe we'll be able to do our first driver-out demonstration runs on public highways in 2021," says Chuck Price, chief product officer at TuSimple, an autonomous trucking firm with operations in the U.S. and China. With a proving ground in Arizona, TuSimple is one of several firms hoping to make billions in an industry that moves over 70% of the nation's goods.

Sensors, cameras and radar devices affixed to the rig feed data to the artificial intelligence-driven supercomputer that controls the truck. Price says his product is superior to others. "Our system can see farther than any other autonomous system in the world. We can see forward over a half-mile… day, night and in the rain. And in the rain at night," he says.

Maureen Fitzgerald, a truck driver who works for TuSimple, says the system drives the truck better than she could. "This truck is scanning mirrors, looking 1,000 meters out. It's processing all the things that my brain could never do and it can react 15 times faster than I could," says Fitzgerald.

Steve Viscelli is a sociologist at the University of Pennsylvania and an expert on freight transportation and automation. He says the disruption to the industry will be severe, "I've identified two segments that I think are most at-risk. And that's-- refrigerated and dry van truckload. And those constitute about 200,000 trucking jobs," says Viscelli. "And then what's called line haul and they're somewhere in the neighborhood of 80,000-90,000 jobs there."

Truckers 60 Minutes spoke to were understandably wary of the new technology, especially how it will react when a human, such as a police officer, issues commands on the road in an emergency. The companies say they're working on all these scenarios, but won't divulge business secrets. That's a problem for Sam Loesche, a representative for the Teamsters and 600,000 truckers. He thinks there isn't enough federal, state or local government oversight on the new technology. "A lot of this information, understandably, is proprietary. Tech companies want to keep… secret until they can kind of get it right. The problem is that, in the meantime, they're testing this technology… next to you as you drive down the road," Loesche tells Wertheim.
Riaz Haq said…
What is ChatGPT? The AI chatbot talked up as a potential Google killer
After all, the AI chatbot seems to be slaying a great deal of search engine responses.

https://interestingengineering.com/science/chatgpt-ai-chatbot-google-killer

ChatGPT is the latest and most impressive artificially intelligent chatbot yet. It was released two weeks ago, and in just five days hit a million users. It’s being used so much that its servers have reached capacity several times.

OpenAI, the company that developed it, is already being discussed as a potential Google slayer. Why look up something on a search engine when ChatGPT can write a whole paragraph explaining the answer? (There’s even a Chrome extension that lets you do both, side by side.)

But what if we never know the secret sauce behind ChatGPT’s capabilities?

The chatbot takes advantage of a number of technical advances published in the open scientific literature in the past couple of decades. But any innovations unique to it are secret. OpenAI could well be trying to build a technical and business moat to keep others out.

What it can (and can’t do)
ChatGPT is very capable. Want a haiku on chatbots? Sure.

How about a joke about chatbots? No problem.

ChatGPT can do many other tricks. It can write computer code to a user’s specifications, draft business letters or rental contracts, compose homework essays and even pass university exams.

Just as important is what ChatGPT can’t do. For instance, it struggles to distinguish between truth and falsehood. It is also often a persuasive liar.

ChatGPT is a bit like autocomplete on your phone. Your phone is trained on a dictionary of words so it completes words. ChatGPT is trained on pretty much all of the web, and can therefore complete whole sentences – or even whole paragraphs.

However, it doesn’t understand what it’s saying, just what words are most likely to come next.

Open only by name
In the past, advances in artificial intelligence (AI) have been accompanied by peer-reviewed literature.

In 2018, for example, when the Google Brain team developed the BERT neural network on which most natural language processing systems are now based (and we suspect ChatGPT is too), the methods were published in peer-reviewed scientific papers, and the code was open-sourced.

And in 2021, DeepMind’s AlphaFold 2, a protein-folding software, was Science’s Breakthrough of the Year. The software and its results were open-sourced so scientists everywhere could use them to advance biology and medicine.

Following the release of ChatGPT, we have only a short blog post describing how it works. There has been no hint of an accompanying scientific publication, or that the code will be open-sourced.

To understand why ChatGPT could be kept secret, you have to understand a little about the company behind it.

OpenAI is perhaps one of the oddest companies to emerge from Silicon Valley. It was set up as a non-profit in 2015 to promote and develop “friendly” AI in a way that “benefits humanity as a whole”. Elon Musk, Peter Thiel, and other leading tech figures pledged US$1 billion towards its goals.

Their thinking was we couldn’t trust for-profit companies to develop increasingly capable AI that aligned with humanity’s prosperity. AI therefore needed to be developed by a non-profit and, as the name suggested, in an open way.

In 2019 OpenAI transitioned into a capped for-profit company (with investors limited to a maximum return of 100 times their investment) and took a US$1 billion investment from Microsoft so it could scale and compete with the tech giants.

It seems money got in the way of OpenAI’s initial plans for openness.

Profiting from users
On top of this, OpenAI appears to be using feedback from users to filter out the fake answers ChatGPT hallucinates.

According to its blog, OpenAI initially used reinforcement learning in ChatGPT to downrank fake and/or problematic answers using a costly hand-constructed training set.
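
The filtering idea can be sketched crudely in code. To be clear about the gap: OpenAI's actual pipeline trains the model itself with reinforcement learning from human feedback, whereas the sketch below only shows the downstream effect of a reward signal. Here a stand-in reward function (the flagged phrases and threshold are entirely hypothetical) scores candidate answers, discards low-scoring ones, and returns the best survivor.

```python
# Crude illustration (not OpenAI's actual pipeline): score candidate
# answers with a reward function and filter out low-scoring ones, the
# way human-feedback training downranks fake or problematic responses.

# Hypothetical stand-in for a hand-constructed training signal: phrases
# human raters flagged as markers of fabricated answers.
FLAGGED_PHRASES = ["studies prove", "it is a well-known fact"]

def reward(answer: str) -> float:
    """Higher is better; each flagged phrase costs half the score."""
    score = 1.0
    for phrase in FLAGGED_PHRASES:
        if phrase in answer.lower():
            score -= 0.5
    return score

def best_answer(candidates, threshold=0.75):
    """Drop candidates below the reward threshold, return the best rest."""
    kept = [c for c in candidates if reward(c) >= threshold]
    return max(kept, key=reward) if kept else None

answers = [
    "Studies prove the moon is made of cheese.",
    "The moon is a rocky natural satellite of Earth.",
]
print(best_answer(answers))  # → the factual answer survives the filter
```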
Riaz Haq said…
Why do your homework when a chatbot can do it for you? A new artificial intelligence tool called ChatGPT has thrilled the Internet with its superhuman abilities to solve math problems, churn out college essays and write research papers.

https://www.npr.org/2022/12/19/1143912956/chatgpt-ai-chatbot-homework-academia

After the developer OpenAI released the text-based system to the public last month, some educators have been sounding the alarm about the potential that such AI systems have to transform academia, for better and worse.

"AI has basically ruined homework," said Ethan Mollick, a professor at the University of Pennsylvania's Wharton School of Business, on Twitter.

The tool has been an instant hit among many of his students, he told NPR in an interview on Morning Edition, with its most immediately obvious use being a way to cheat by plagiarizing the AI-written work, he said.

Academic fraud aside, Mollick also sees its benefits as a learning companion.

He's used it as his own teacher's assistant, for help with crafting a syllabus, lecture, an assignment and a grading rubric for MBA students.

"You can paste in entire academic papers and ask it to summarize it. You can ask it to find an error in your code and correct it and tell you why you got it wrong," he said. "It's this multiplier of ability, that I think we are not quite getting our heads around, that is absolutely stunning," he said.

A convincing — yet untrustworthy — bot
But the superhuman virtual assistant — like any emerging AI tech — has its limitations. ChatGPT was created by humans, after all. OpenAI has trained the tool using a large dataset of real human conversations.

"The best way to think about this is you are chatting with an omniscient, eager-to-please intern who sometimes lies to you," Mollick said.

It lies with confidence, too. Despite its authoritative tone, there have been instances in which ChatGPT won't tell you when it doesn't have the answer.

That's what Teresa Kubacka, a data scientist based in Zurich, Switzerland, found when she experimented with the language model. Kubacka, who studied physics for her Ph.D., tested the tool by asking it about a made-up physical phenomenon.

"I deliberately asked it about something that I thought that I know doesn't exist so that they can judge whether it actually also has the notion of what exists and what doesn't exist," she said.

ChatGPT produced an answer so specific and plausible sounding, backed with citations, she said, that she had to investigate whether the fake phenomenon, "a cycloidal inverted electromagnon," was actually real.

When she looked closer, the alleged source material was also bogus, she said. There were names of well-known physics experts listed – the titles of the publications they supposedly authored, however, were non-existent, she said.

"This is where it becomes kind of dangerous," Kubacka said. "The moment that you cannot trust the references, it also kind of erodes the trust in citing science whatsoever," she said.

Scientists call these fake generations "hallucinations."

"There are still many cases where you ask it a question and it'll give you a very impressive-sounding answer that's just dead wrong," said Oren Etzioni, the founding CEO of the Allen Institute for AI, who ran the research nonprofit until recently. "And, of course, that's a problem if you don't carefully verify or corroborate its facts."
