SparkBeyond crawled hundreds of billions of Internet pages, papers, patents and social media sites to build one of the largest available knowledge graphs. Based on this data it is possible to ask natural language questions about the knowledge and get an aggregated knowledge summary. Unlike Google search, where you have to manually go over a zillion resources, here the data is summarized and aggregated visually. It is possible to understand reasons and trends, ask follow-up questions, and see supporting evidence and statistics.

Unlike a typical language model, which gives you a summary without knowing where the data was obtained from, in SparkBeyond's model it is possible to get detailed references showing where the answer is coming from.

An interesting related work is ColBERT from Prof. Matei Zaharia. Instead of memorizing the full language model using hundreds of billions of parameters, a significantly smaller index is maintained that retrieves the relevant information on the fly.

I found some recent news about Colossal, a new startup that wants to revive the extinct mammoth to fight global warming. Fighting global warming is one of the best things we can do, and notably one of the co-founders is Prof. George Church from Harvard Medical School, a very credible authority on gene editing. Church is one of the inventors of CRISPR, a gene editing tool that can cut and paste any desired segment of DNA and thus make whatever changes we like.

Here is my take on it:

Their website is amazing; a lot of effort was invested on that front. It backs up the pretty wild idea and thus draws a lot of attention to this work. The raised amount of $15M is tiny considering the amount of lab effort, equipment, materials, etc.

Global warming sounds like an awkward excuse to fund the research they really want to do. Ben Lamm, CEO of Colossal, told The Washington Post in an email that the extinction of the woolly mammoth left an ecological void in the Arctic tundra that Colossal aims to fill. The eventual goal is to return the species to the region so that it can reestablish grasslands and protect the permafrost, keeping it from releasing greenhouse gases at such a high rate.

Sending a wild mammoth to eat grass somewhere frozen, in the hope of reducing gas emissions, is likely the most complicated way to fight global warming I can imagine. But it is a sexy way of drawing news attention.

Mammoth DNA and human DNA are most likely around 90% similar. Thus having the ability to revive an extinct mammoth will also enable reviving humans. Recently, Israeli research has shown the possibility of raising mouse embryos outside the womb. So raising a mammoth outside the womb, as they plan to do, is maybe doable.

Christopher Preston, a professor of environmental ethics and philosophy at the University of Montana, questioned Colossal’s focus on climate change, given that it would take decades to raise a herd of woolly mammoths large enough to have environmental impacts.

So, the real applications of this technology may be applied to humans. For example, what if I wanted to revive my dead grandfather? What if I wanted a baby with blond hair and blue eyes? My guess is that there is a huge market for this technology in real life.

I wonder why all the news and media attention ignores the actual use cases of this technology.

A nice and recent paper from Lior Wolf's lab at Tel Aviv University: https://arxiv.org/pdf/2103.15679.pdf by Hila Chefer, Shir Gur and Lior Wolf. The problem is very simple: given a transformer encoder/decoder network, we would like to visualize the effect of attention on the image. While the problem is simple, the answer is pretty complicated: we need to take into account attention matrices from multiple layers at once. The paper suggests an iterative way to add up all those attention layers into one coherent image.

Figure 4 shows that the result is very compelling vs. prior art:

The top row is the new paper and the bottom row is prior work for comparison.
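The simplest flavor of this layer-aggregation idea is attention rollout, a baseline the paper improves on. Here is a minimal numpy sketch of that baseline (my own illustration, not the paper's exact gradient-weighted rule):

```python
import numpy as np

def attention_rollout(attentions):
    """Aggregate per-layer attention maps into one token-to-token map.

    attentions: list of (tokens x tokens) row-stochastic attention
    matrices, one per layer, already averaged over heads.
    """
    n = attentions[0].shape[0]
    joint = np.eye(n)
    for A in attentions:
        A_res = 0.5 * A + 0.5 * np.eye(n)           # account for the residual connection
        A_res /= A_res.sum(axis=-1, keepdims=True)  # re-normalize the rows
        joint = A_res @ joint                       # chain attention across layers
    return joint
```

The paper goes further by weighting each layer's attention with gradient-based relevance before aggregating, which avoids the blurring that plain matrix multiplication across layers tends to produce.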

I have stumbled upon this nice tutorial: which interactively visualizes Gaussian Belief Propagation. What is nice about it is that the authors spent time making an interactive tutorial that you can play with.

As a grad student I was totally excited about Gaussian Belief Propagation and spent a large chunk of my PhD thesis on it. In a nutshell, it is an iterative algorithm for solving a set of linear equations (for a PSD square matrix). The algorithm is very similar to the Jacobi iterative method but uses second order information (namely an approximation of the Hessian) to improve convergence speed at the cost of additional memory and computation. In deep learning terminology this is related to adding Adam/Momentum/ADMM etc. From personal experience, when people get excited about speeding up convergence of an iterative algorithm they completely neglect the fact that there is no free lunch: when you speed up convergence in terms of number of iterations you typically pay in something else (computation/communication).
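For reference, the Jacobi baseline mentioned above is only a few lines. A sketch, assuming a diagonally dominant system so the iteration is guaranteed to converge:

```python
import numpy as np

def jacobi(A, b, iters=200):
    """Solve A x = b with the Jacobi iteration: x <- D^{-1} (b - (A - D) x).

    Converges when the spectral radius of D^{-1}(A - D) is below 1,
    e.g. for strictly diagonally dominant A.
    """
    D = np.diag(A)                     # diagonal entries of A
    R = A - np.diag(D)                 # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / D            # one fixed-point sweep
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b)                       # close to np.linalg.solve(A, b)
```

GaBP spends extra work per iteration (and extra memory for messages) to converge in fewer iterations than this, which is exactly the no-free-lunch trade-off above.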

The complexity of the algorithm derivation comes from the fact that it arises from probabilistic graphical models, where the notation of the problem is cumbersome, as it can be presented as either a factor graph or an undirected graphical model. A factor graph is a bipartite graph with evidence nodes (the input) on one side and a function aggregating the nodes on the other side. It is very similar to a single dense layer in deep learning, where the input comes from the left and the summation plus activation is done on the right. However, unlike deep learning, the factor graph has only a single layer and the messages propagate back to the variable (input) nodes, back and forth. So the factor graph is the great-grandfather of deep learning.

To make it totally confusing, the seminal paper by Prof. Weiss uses pairwise notation, which is a third way of presenting the same model. (Instead of a single linear system of equations it is a collection of multiple sets of sparse linear equations where each set has two variables only.)

Any differentiable function can be locally approximated to first order around a point by computing the gradient. That is why we often see linear modeling when modeling complex problems, including in deep learning, where each dense layer is linear. This is the relevance of solving linear models in multiple domains.
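As a toy illustration of that local linearization (the example function here is my own, not from any of the papers above):

```python
import numpy as np

def linearize(f, grad_f, a):
    """First-order Taylor approximation of f around the point a:
    f(x) ~= f(a) + grad_f(a) . (x - a)."""
    fa, ga = f(a), grad_f(a)
    return lambda x: fa + ga @ (x - a)

# Toy example: f(x) = x0^2 + sin(x1), linearized around (1, 0)
f = lambda x: x[0] ** 2 + np.sin(x[1])
grad_f = lambda x: np.array([2.0 * x[0], np.cos(x[1])])
approx = linearize(f, grad_f, np.array([1.0, 0.0]))
```

Near the expansion point the linear model tracks f closely; farther away it degrades, which is why deep networks stack many such linear layers with nonlinearities in between.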

Another nice property of the algorithm is that besides the marginals (the solution to the linear system of equations) we get an approximation of the main diagonal of the inverse matrix of the linear system. This is often useful when inverting the full matrix is too computationally heavy.
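A compact sketch of the GaBP updates on the pairwise model, assuming a symmetric PSD matrix (and, for guaranteed convergence, e.g. a diagonally dominant one); the variable and message names here are my own:

```python
import numpy as np

def gabp(A, b, iters=50):
    """Gaussian Belief Propagation for A x = b (A symmetric, PSD).

    Returns the marginal means (the solution x) and the marginal
    precisions, whose reciprocals approximate diag(inv(A)).
    """
    n = len(b)
    P = np.zeros((n, n))  # precision message P[i, j] from node i to node j
    M = np.zeros((n, n))  # mean message M[i, j] from node i to node j
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if i != j and A[i, j] != 0:
                    # aggregate everything node i knows, excluding j's message
                    p = A[i, i] + P[:, i].sum() - P[j, i]
                    m = (b[i] + (P[:, i] * M[:, i]).sum()
                         - P[j, i] * M[j, i]) / p
                    P[i, j] = -A[i, j] ** 2 / p
                    M[i, j] = p * m / A[i, j]
    prec = np.diag(A) + P.sum(axis=0)       # marginal precisions
    mu = (b + (P * M).sum(axis=0)) / prec   # marginal means = solution x
    return mu, prec
```

On tree-structured systems (e.g. a tridiagonal matrix) both outputs are exact; on loopy graphs the means remain exact at convergence while the precisions are only an approximation of the inverse diagonal.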

As someone who works on manufacturing automation with robotics and vision, I can say this is a very complicated task, since the robot has to distinguish, from a 2D image, between the right crop and weeds. Also, the laser shooting of the weeds is awesome!

After one minute of digging I found out that I know Nick Kirsch, who is a director at Carbon Robotics and was an executive intern in our startup Turi in 2016! This is a Seattle-based company; I can't wait to talk to Nick and learn more.

Today I found out (slightly late) that Prof. Eric Xing from Carnegie Mellon joined MBZUAI (Mohamed bin Zayed University of Artificial Intelligence) as their President late last year. Eric is a well known professor whom I know from my CMU days, and who was the CEO of Petuum, a Parameter Server-like implementation for scaling up machine learning.

From the MBZUAI website: MBZUAI is the world's first graduate-level, research-based artificial intelligence (AI) university. Launched in October 2019 and located in Masdar City, Abu Dhabi, the University aims to empower students, businesses and governments to advance artificial intelligence as a global force for positive progress.

When reading this news I also found that the Israeli Weizmann Institute is collaborating with MBZUAI on a joint AI program. This is a great fruit of the recent peace treaty between Israel and the UAE.

Another interesting organization is g42.ai, an OpenAI-like org from Abu Dhabi.