Wednesday, November 3, 2021

Having fun with Macaw - or - Dinosaur meat is sour!

Following the announcement from Prof. Oren Etzioni about the new Macaw NLP model from his team at AI2, which is claimed to be up to 10% better than OpenAI's model, I spent a fun afternoon playing with the released models.

So far three models have been released: large, 3b and 11b. The documentation says that large is a lightweight model (indeed a good naming convention here) and should not be fully trusted; the other two are 3b (medium) and 11b (the big one).

I started with the large model, and my conclusion is that, besides solving the demo question about the sky's color, it is completely non-functional.

  • Who killed Jesus? The romans (correct)
  • Who is Carlos Guestrin? a leading authority on space and human exploration (wrong)
  • Who is Oren Etzioni? a leading authority on migration and conservation  (wrong)
  • What is Belgian Malinois? a kind of dog that carries a large amount of weight (wrong)
  • Who came first the Romans or the Greeks? The Romans (wrong)
  • What is the population of Israel? 2.4 Billion (wrong)
  • Who is larger a bird or an elephant? the bird, the elephant (wrong) 
  • Who is Benjamin Netanyahu?  the prime minister of new york (funny and wrong)
  • Are you drunk? Yes (maybe right?)
  • Name a book by Alexander Dumas? The Comedy of Errors (again wrong).
To summarize: since the model is so often wrong, it would help to get an additional confidence estimate that tells you when the answer is most likely junk.
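For readers who want to reproduce these experiments, here is a minimal sketch of how the model can be queried. The slot-based input format (`$answer$`, `$question$`, `$mcoptions$`) follows the released Macaw repo; the helper function name is my own:

```python
def build_macaw_input(question, options=None):
    """Build an input string in Macaw's slot format: the model is asked
    to fill the $answer$ slot, optionally given multiple-choice options."""
    parts = ["$answer$"]
    if options:
        letters = "ABCDEFGH"
        mc = " ".join(f"({letters[i]}) {o}" for i, o in enumerate(options))
        parts.append(f"$mcoptions$ = {mc}")
    parts.append(f"$question$ = {question}")
    return " ; ".join(parts)

# The string is then fed to the seq2seq model, e.g. (not run here):
#   from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
#   tok = AutoTokenizer.from_pretrained("allenai/macaw-large")
#   model = AutoModelForSeq2SeqLM.from_pretrained("allenai/macaw-large")
#   ids = tok(build_macaw_input("What is the color of the sky?"),
#             return_tensors="pt").input_ids
#   out = model.generate(ids)
```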

After getting familiar with the small and weird model I moved to the medium model and continued to have fun there.
  • How many cents in one dollar? 100 (correct)
  • What is the main cause of global warming? greenhouse effect (correct). Some other good answers: volcanic eruptions (B) ocean currents (C) soil erosion (D) human population growth
  • What is the best vertical to open a startup in machine learning? healthcare, additional answers: (A) finance (B) healthcare (C) retail (D) technology. Not bad! Note that some answers repeat; there is no pruning of duplicate answers.
  • What should I do when my boss is upset with me at work? apologize. Additional options: (A) leave the office (B) call the police (C) go to the mall (D) stay late. I like the apologize and stay late! Call the police answer is hilarious. And go to the mall is the favorite approach of my wife!
  • What is the best way to defend against the Covid virus? vaccinating (correct). Additional answers: (A) using a flu vaccine (B) using a tetanus shot (C) using a gamma ray to destroy the virus (D) using a gamma ray to destroy the virus.
  • What is the recommended number of whiskey shots to drink before driving? Two (wrong!)
  • If I love her does she love me back? yes (wrong) Additional answers start to look better: (A) she will love me back (B) she will hate me (C) she will leave me (D) she will never love me.
  • How many calories in marble ball? 0 (correct)
  • Who is the best venture capital firm? SBI (wrong, never heard of them)
  • What is the taste of dinosaur meat? sour ; Additional options: (A) salty (B) sweet (C) savory (D) a little bit sour. Who knows?
  • Is there life on other planets? yes. Additional options: (A) no life is found on other planets (B) there is life on other planets (C) there is life on Mars (D) there is life on other planets. Who knows?
  • Were there weapons for mass destruction in Iraq? yes (wrong). Additional options (A) no weapons of mass destruction were found in Iraq (B) there were no weapons of mass destruction in Iraq (C) there were weapons of mass destruction in Iraq but they were destroyed (D) there were weapons of mass destruction but they were not destroyed. Interestingly, both Iraq and Mars are capitalized (probably identified named entities) while other entities are left in lowercase.
  • Who is behind the nine eleven attack? al qaeda (correct). Additional conspiracies (A) the government (B) the military (C) the intelligence community (D) the religious right
  • What year will the aliens attack us? 2100. Who knows?
  • Do ghosts exist? yes. Additional options: (A) they are just a kind of animal (B) they are made of air (C) they are made of water (D) they exist in the sky.
To conclude, some answers are totally correct and some answers are totally hilarious. Maybe one should invent a "don't know" answer and return it instead whenever the confidence is very low.

The handling of named entities is a bit weird; for example, Iraq is capitalized but new york and al qaeda are not. I wonder if there is a post-processing step that capitalizes some named entities?

In terms of running time the models are pretty slow. It takes a couple of minutes on the latest Mac M1 to get an answer.
I will next try to test the 11b model; it is 40GB and thus slooooow to download locally.

Tuesday, November 2, 2021

BebopNet: Deep Neural Models for Personalized Jazz Improvisations

I recently found this paper: BebopNet: Deep Neural Models for Personalized Jazz Improvisations, by Shunit Haviv Hakimi, Nadav Bhonker and Ran El-Yaniv from the Technion. It uses a deep learning approach to teach a machine to improvise when playing jazz. The paper won the best paper award at ISMIR 2020.


The results are pretty cool. The level of improvisation is pretty good but I hear a little awkwardness in the timing of the notes. 


Wednesday, October 13, 2021

Amazing Demo from SparkBeyond

My friend Sagie Davidovich, CEO of SparkBeyond, has shown me the following amazing demo:



SparkBeyond crawled hundreds of billions of Internet pages, papers, patents and social media sites to build one of the largest available knowledge graphs. Based on this data it is possible to ask natural language questions and get an aggregated knowledge summary. Unlike Google search, where you have to manually go over a zillion resources, here the data is summarized and aggregated visually. It is possible to understand reasons and trends, ask follow-up questions, and see supporting evidence and statistics.

Unlike the typical language model, which gives you a summary without knowing where the data was obtained from, in SparkBeyond's model it is possible to get detailed references showing where the answer comes from.

An interesting related work is ColBERT from Prof. Matei Zaharia's group. Instead of memorizing everything in a language model with hundreds of billions of parameters, a significantly smaller index is maintained that retrieves the relevant information on the fly.
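The core of ColBERT's retrieval step is a simple "late interaction" MaxSim rule between query and document token embeddings. A minimal numpy sketch (the function name and shapes are my own, not the official API):

```python
import numpy as np

def maxsim_score(Q, D):
    """Late-interaction score: each query token embedding picks its
    best-matching document token embedding; the maxima are summed.
    Q: (num_query_tokens, dim), D: (num_doc_tokens, dim)."""
    sim = Q @ D.T                 # pairwise token-level similarities
    return sim.max(axis=1).sum()  # MaxSim over doc tokens, sum over query tokens
```

Because documents are stored as per-token vectors in an index, only this cheap scoring runs at query time rather than a forward pass through a huge model.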


Monday, September 27, 2021

Colossal - The Future of DNA Editing?

I found some recent news about Colossal, a new startup that wants to revive the extinct mammoth to fight global warming. Fighting global warming is one of the best things we can do, and the effort gains credibility from the fact that one of the co-founders is Prof. George Church from Harvard Medical School, a very credible authority on gene editing. Church is one of the inventors of CRISPR, a gene editing tool that can cut and paste any desired segment of DNA and thus make whatever changes we like.

Here is my take on it:

  • Their website is amazing; a lot of effort was invested on that front. It backs up the pretty wild idea and thus draws a lot of attention to this work. The raised amount of $15M is tiny considering the required lab effort, equipment, materials etc.
  • Global warming sounds like an awkward excuse to fund the research they really like to do.
    Ben Lamm, CEO of Colossal, told The Washington Post in an email that the extinction of the woolly mammoth left an ecological void in the Arctic tundra that Colossal aims to fill. The eventual goal is to return the species to the region so that they can reestablish grasslands and protect the permafrost, keeping it from releasing greenhouse gases at such a high rate.
  • Sending a wild mammoth to eat grass somewhere frozen, with the hope of reducing gas emissions, is likely the most complicated way to fight global warming I can imagine. But it is a sexy way of drawing news attention.
  • Mammoth DNA and human DNA are most likely about 90% similar. Thus the ability to revive an extinct mammoth may also enable reviving people. Recently, Israeli research has shown the possibility of raising mouse embryos outside the womb. So raising a mammoth outside the womb, as they plan to do, is maybe doable.
  • Christopher Preston, a professor of environmental ethics and philosophy at the University of Montana, questioned Colossal’s focus on climate change, given that it would take decades to raise a herd of woolly mammoths large enough to have environmental impacts.
  • So, the real applications of this technology may be to humans. For example, what if I wanted to revive my dead grandfather? What if I wanted a baby with blond hair and blue eyes? My guess is that there is a huge market for this technology in real life.
I wonder why all the news and media attention ignores the actual use cases of this technology?

Monday, September 6, 2021

How can we visualize attention?

A nice and recent paper from Lior Wolf's lab at Tel Aviv University: https://arxiv.org/pdf/2103.15679.pdf by Hila Chefer, Shir Gur and Lior Wolf. The problem is very simple: given a transformer encoder/decoder network, we would like to visualize the effect of attention on the image. While the problem is simple, the answer is pretty complicated: we need to take into account attention matrices from multiple layers at once. The paper suggests an iterative way to add up all those attention layers into one coherent image.
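The paper's relevance-propagation rule is more involved, but the simpler attention-rollout idea it builds on (chaining attention maps across layers while accounting for residual connections) can be sketched as follows. This is my own simplification, not the paper's exact method:

```python
import numpy as np

def attention_rollout(attns):
    """attns: list of per-layer attention maps, each (tokens, tokens),
    already averaged over heads. Returns the accumulated token-to-token map."""
    n = attns[0].shape[0]
    joint = np.eye(n)
    for A in attns:
        A = 0.5 * (A + np.eye(n))              # model the residual (skip) connection
        A = A / A.sum(axis=-1, keepdims=True)  # re-normalize the rows
        joint = A @ joint                      # chain the layers together
    return joint
```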

Figure 4 shows that the result is very compelling vs. previous art: 

The top row is from the new paper and the bottom row shows prior work for comparison.

Thursday, September 2, 2021

Gaussian Belief Propagation Tutorial

I have stumbled upon this nice tutorial, which interactively visualizes Gaussian Belief Propagation. What is nice about it is that the authors spent time to make an interactive tutorial that you can play with.


As a grad student I was totally excited about Gaussian Belief Propagation and spent a large chunk of my PhD thesis on it. In a nutshell, it is an iterative algorithm for solving a set of linear equations (for a PSD square matrix). The algorithm is very similar to the Jacobi iterative method but uses second order information (namely an approximation of the Hessian) to improve convergence speed at the cost of additional memory & computation. In deep learning terminology this is related to adding Adam/momentum/ADMM etc. From personal experience, when people get excited about speeding up the convergence of an iterative algorithm they completely neglect the fact that there is no free lunch: when you speed up convergence in terms of number of iterations you typically pay in something else (computation/communication).
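For reference, the Jacobi method mentioned above is just a few lines; a minimal sketch for a diagonally dominant system:

```python
import numpy as np

def jacobi(A, b, iters=200):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - R x_k), with A = D + R."""
    D = np.diag(A)       # diagonal part of A
    R = A - np.diag(D)   # off-diagonal remainder
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x
```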

The complexity of the algorithm derivation comes from the fact that it arises from probabilistic graphical models, where the notation of the problem is cumbersome, as it can be presented as either a factor graph or an undirected graphical model. A factor graph is a bipartite graph with evidence nodes (the input) on one side and functions aggregating the nodes on the other side. It is very similar to a single dense layer in deep learning, where the input comes from the left and the summation plus activation is done on the right. However, unlike deep learning, the factor graph has only a single layer and the messages propagate back and forth to the variable (input) nodes. So the factor graph is the great-grandfather of deep learning.

To make it totally confusing, the seminal paper by Prof. Weiss uses pairwise notation, which is a third way of presenting the same model. (Instead of a single linear system of equations it is a collection of multiple sets of sparse linear equations, where each set has only two variables.)

Any continuous function can be locally approximated to first order around a point by computing the gradient. That is why we often see linear models when modeling complex problems, including in deep learning where each dense layer is linear. This is why solving linear systems is relevant in so many domains.
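Concretely, the first-order approximation around a point x0 is f(x) ≈ f(x0) + f'(x0)(x - x0); a tiny sketch:

```python
def linearize(f, grad_f, x0):
    """Return the first-order (tangent) approximation of f around x0."""
    return lambda x: f(x0) + grad_f(x0) * (x - x0)

# Example: f(x) = x^2 around x0 = 3; near x0 the linear model is accurate.
approx = linearize(lambda x: x * x, lambda x: 2 * x, 3.0)
```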

Another nice property of the algorithm is that besides the marginals (the solution to the linear system of equations) we get an approximation to the main diagonal of the inverse matrix of the linear system. This is often useful when inverting the full matrix is too heavy computationally.
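To make the above concrete, here is a minimal dense-matrix sketch of scalar (pairwise) GaBP with a synchronous message schedule. It assumes a symmetric, diagonally dominant A, where convergence is guaranteed; the variable names are my own:

```python
import numpy as np

def gabp(A, b, max_iter=100, tol=1e-10):
    """Solve A x = b by Gaussian Belief Propagation.
    Returns (x, var): the solution and an approximation of diag(inv(A))
    (exact when the graph of A is a tree)."""
    n = len(b)
    P = np.zeros((n, n))   # precision message P[i, j] from node i to node j
    mu = np.zeros((n, n))  # mean message mu[i, j] from node i to node j
    for _ in range(max_iter):
        P_new, mu_new = np.zeros_like(P), np.zeros_like(mu)
        for i in range(n):
            for j in range(n):
                if i == j or A[i, j] == 0.0:
                    continue
                # aggregate all incoming messages to i, excluding the one from j
                P_i = A[i, i] + P[:, i].sum() - P[j, i]
                mu_i = (b[i] + (P[:, i] * mu[:, i]).sum()
                        - P[j, i] * mu[j, i]) / P_i
                P_new[i, j] = -A[i, j] ** 2 / P_i
                mu_new[i, j] = P_i * mu_i / A[i, j]
        done = np.allclose(P, P_new, atol=tol) and np.allclose(mu, mu_new, atol=tol)
        P, mu = P_new, mu_new
        if done:
            break
    Pi = np.diag(A) + P.sum(axis=0)       # marginal precisions
    x = (b + (P * mu).sum(axis=0)) / Pi   # marginal means = the solution
    return x, 1.0 / Pi                    # 1/Pi approximates diag(inv(A))
```

On a tridiagonal (chain-structured, hence tree) matrix both the solution and the inverse-diagonal approximation are exact.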


Saturday, August 28, 2021

Amazing: Carbon Robotics Weeder - Deep Learning for Organic Weed Control!

 This is totally amazing: 

As someone who works on manufacturing automation with robotics and vision, I can say this is a very complicated task, since the robot has to distinguish, from a 2D image, between the crop and the weeds. Also, the laser shooting of the weeds is awesome!

After one minute of digging I found out that I know Nick Kirsch, who is a director at Carbon Robotics and was an executive intern in our startup Turi in 2016! This is a Seattle-based company; I can't wait to talk to Nick and learn more.

What is MBZUAI?

Today I found out (slightly late) that Prof. Eric Xing from Carnegie Mellon joined MBZUAI (Mohamed bin Zayed University of Artificial Intelligence) as its President late last year. Eric is a well known professor whom I know from my CMU days, and he was the CEO of Petuum, a parameter-server-like system for scaling up machine learning.

From MBZUAI website: MBZUAI is the world’s first graduate-level, research-based artificial intelligence (AI) university. Launched in October 2019 and located in Masdar City, Abu Dhabi, the University aims to empower students, businesses and governments to advance artificial intelligence as a global force for positive progress. 

When reading this news I also found that the Israeli Weizmann Institute is collaborating with MBZUAI on a joint AI program. This is a great fruit of the recent peace treaty between Israel and the UAE.

Another interesting organization is g42.ai, an OpenAI-like org from Abu Dhabi.

Saturday, August 21, 2021

Israeli AI21 Launches the Biggest NLP Model So Far

AI21 is a research lab that is the Israeli equivalent of OpenAI, founded by several machine learning luminaries including Prof. Amnon Shashua (Mobileye, OrCam, Digital Bank), a professor at the Hebrew University. (Amnon was my lecturer for the ML course, which was an amazing course, and he is an amazing person as well.)

This week AI21 announced the release of the largest NLP model so far, called Jurassic-1. It is a model comparable to GPT-3. There is no objective evaluation of the two models, but AI21 mentions that its vocabulary contains 250K word tokens (compared to around 50K for GPT-3), which gives more flexibility in answering questions regarding common phrases, named entities etc. A great tutorial for GPT-3 is given in Yannic's Youtube Channel:



Building such a large NLP model is challenging, since the model has around 170B parameters and you need weeks of training with hundreds of GPUs, a cost that typically only the biggest companies can afford. Another interesting company I recently met is LightOn, which builds photonics-based hardware for training language models; they recently announced the largest French language model.

It will be interesting to see when AI21 and similar companies will move to training on non-English corpora, which is where such companies can shine.

An interesting conference coming up soon is the NLP Summit (An online event Oct 5-7).



Saturday, August 7, 2021

Yannic Kilchner - The Man and the Legend

I recently stumbled upon Yannic's Youtube Channel and I was totally blown away. Yannic is a fresh PhD out of ETH Zurich and he has a few dozen recent deep learning papers explained amazingly well. Both the selection of papers and the explanation of the content are smart. In addition, for some of the papers he adds personal comments and critiques of the papers' claims which really make sense. The audience for those tutorials is an advanced deep learning audience, and they cover advanced topics that Coursera courses have mostly not caught up with yet. For example, there is great coverage of transformers for both language and image models.

According to his LinkedIn, Yannic recently started a company along with 3 other ETH PhDs called DeepJudge, which deploys deep learning NLP models in the legal domain. The company is 4 months old and, according to Crunchbase, raised a small seed round.

Based on the brains of the DeepJudge team, I call on all the VCs, headhunters, university recruiters and everyone else to wake up! I am pretty sure we will hear a lot from those guys.