Juan Enriquez at SFI

"Are Humans Optimal?"

 

  • Historically, several hominin species have coexisted on the planet. Right now, humans are the only hominin species.
    • Typically when there is only one species, that is a sign of impending extinction.
  • The difference between humans and Neanderthals is less than 0.004% on the genomic level.
    • Differences are in sperm, testes, smell and skin
  • There was an experiment in Russia to domesticate wild foxes through breeding. They took only the friendliest foxes and bred them with each other. Within a few generations the foxes were tame and worthy of being pets (more on that here).
  • We can now sequence and acquire genetic data 3x quicker than our capacity to store it. We’ve sequenced about 10,000 human genomes to date. We will start to find more differences soon.
  • Life is imperfectly transmitted code.
  • We can now grow new teeth (human teeth, using stem cells from a lost tooth). We can build an ear, a bladder, a trachea.
  • Homo evolutis:
    • For better or worse, we’re beginning to control our own evolution
    • This is “unnatural selection or actual intelligent design”
    • We have to live with the consequences, whether they be good or bad.
    • So far, using these technologies we have taken ourselves out of the food chain and doubled lifespans. In this respect, it’s been good for us so far.
  • While we conventionally speak about how great the digital revolution has been, the revolution in life sciences is and will be magnitudes greater.
  • Co-founded Synthetic Genomics with J. Craig Venter (one of the first to sequence the human genome)
    • Synthetic Genomics has built a cell that can operate like a computer system: a cell that executes life code.
    • It may be possible to reprogram a species to become another species.
    • It’s like a software that makes its own hardware.
    • Algae are the best scalable production system for energy development in a constrained world.
  • “We are evolving ourselves.” In science, “there are decades when nothing happens and weeks when everything happens.” (a questioner in the audience pointed out this quote comes from Lenin).
  • Q: “Do we have secular stagnation?”
    • Enriquez: A resounding no. Today there are people who are smart, creative, with scale and ambition. Lots of great things are happening in the sciences. We are as advanced as ever, and increasingly so. One problem is that with technology, our interest in sex is different than it used to be, and sex is not keeping the developed world’s population growing fast enough.

 

Rob Park at SFI

"Logic and Intent: Shaping Today's Financial Markets"

  • Started program trading with a spread algo between Deere and Caterpillar, under the assumption that fundamental drivers were similar and spreads would revert to the mean.
    • In executing this algo, felt orders were being copied by someone else.
  • Today, 70% of total US volume is algos.
  • How do algos introduce risks?
    • Problems occur when you can’t predict.
  • The algo ecosystem: the number of possibilities grow exponentially when algos interact with other algos.
    • 1: runaway algos. Example: on Amazon there was a book listed at $1 million. Another seller’s marketplace algorithm raised its price ever so slightly, and that triggered a cascade in which the book ended up listed for $20 million (the story of how this happened is fascinating and told here)
    • 2: Flash crash – unpredictable interaction of algos
  • What is an algorithm? It is a sequence of logic statements. All algos are created by humans. They do what people intend them to do. Intent=important. Humans are driven by incentives, algorithms are driven by human intent.
    • The technologist needs to understand the human goal, or else risk is introduced into the system.
  • IEX introduced a 350 microsecond delay on an order reaching the exchange.
  • The broker’s dilemma: brokers were matching orders between buyers and sellers, so brokers created dark pools. Broker A gets the buy, Broker B gets the sell, what’s the incentive for Broker A to trade with B?
  • In today’s market there are 11 exchanges, 40+ dark pools (IEX right now is a dark pool, but will try to become an exchange eventually).
  • Exchange dilemma: exchanges connect issuers with investors. Exchanges are supposed to be neutral to all participants, but are now for-profit companies who build services for specific customers. This is not the intended purpose of exchanges, and it biases these exchanges toward one kind of participant (HFTs) over another.
  • There have been three generations of market algos so far:
    • 1: automate trader flow. Algos execute on traders’ ideas, helping these traders focus on “their work” as opposed to execution
    • 2: game the first-generation algos. These algos took advantage of transparent inefficiencies in the first generation’s functionality.
    • 3: counteract generation 2. A trader who wants to buy size needs to game generation-2 algos in order to hide intent and execute efficiently.
  • Participants send orders, but they don’t arrive at the actual exchange at the same time.
  • At the micro level, markets are deterministic (opposite of physics).
  • Latency arb—in a distributed system, race conditions matter, and HFT aims to exploit the race. An exchange needs to know where the market is before pricing a transaction. IEX introduces its 350 microsecond delay by routing orders through a coiled, fishing-line-like length of fiber. With the delay in place, the exchange can assume the order is not fast, and then figure out where the market is.
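As a toy illustration of the race (my sketch, not IEX’s actual system; every latency except the 350 microseconds from the talk is made up), consider whether a fast order reaches the matching engine before the exchange’s own view of the market refreshes:

```python
def stale_quote_picked_off(fast_order_us, exchange_view_update_us, speed_bump_us):
    """True if the fast order executes against a quote the exchange
    would have repriced had its own market view caught up first."""
    return fast_order_us + speed_bump_us < exchange_view_update_us

# t=0: the market moves on another venue. The HFT reacts in 50us; the
# exchange's view of the consolidated market refreshes at 100us.
print(stale_quote_picked_off(50, 100, 0))    # True: stale quote picked off
print(stale_quote_picked_off(50, 100, 350))  # False: bump lets the view refresh first
```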
  • Resistance to IEX so far has come from 2nd generation algo programmers. 

 

John Doyle at SFI

"Universal Laws and Architectures for Robust Efficiency in Nets, Grids, Bugs, Hearts and Minds"

 

  • By making things more efficient, you can make them worse (more fragile)
  • Architectural flexibility determines what is achievable
  • Heroes: Darwin and Turing; dynamics and feedback
  • Efficiency and robustness are 2 aspects we want.
    • Sustainable=robust + efficient
  • Antifragile=adaptability and evolvability. 
    • Concrete, verifiable, testable.
    • “It’s much easier to bullshit at the macro level than micro.” 
  • Robustness, efficiency, and adaptivity.

  • What makes us robust is controlled and acute, what makes us fragile are those same features when they are uncontrolled and chronic.

  • Robust efficiency is at the heart of these trade-offs. 
    • On the cell level, we are robust in energy and efficient in energy use.
    • Big fragilities are unintended consequences of mechanisms designed for robustness. 
    • There are tradeoffs between the two. 
    • Fragility is due to the “hijacking” of robustness.
  • In the human transition to bipedalism, we became four times more efficient than chimps at distance running, but chimps are faster and better off over shorter distances.
    • Similarly, if we go on a bike, we are 2x as fast as walking, but more fragile. 
    • Further, we can’t simply “add” a bike to ourselves to gain this speed. 
    • We must add the bike + learn how to ride it.
  • There was a visual demonstration, but for the purposes of these notes: imagine there is a wand that can get smaller or larger (or even better, try this with a pen). 
    • You can either hold it in your hand downwards, or balance it on top of your hand upwards (the balancing upwards is nearly impossible with the pen, though that’s part of the point).
    • Down is easy to control, up is hard and destabilizing. 
    • Up and looking away (ie don’t look at your hand, but look elsewhere entirely) is nearly impossible.
    • Gravity is a law. 
      • When we hold the wand downward, gravity is stabilizing. 
      • Stabilizing insofar as it holds it steady and straight. Gravity is destabilizing when holding it up.
    • Down=the easiest, up=harder, up with a short wand=the hardest (that’s why you can’t balance the pen upwards!).
  • We can look at the entropy rate, exp(pt): errors grow exponentially at rate p. This explains quantitatively, through a law, what the demo showed qualitatively.
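A standard way to quantify the wand demo (my gloss, using the textbook inverted-pendulum linearization rather than anything from the slides) is that the unstable rate is p = sqrt(g/L), so tilt errors grow like exp(p·t) and shorter wands diverge faster:

```python
import math

G_ACCEL = 9.81  # m/s^2

def instability_rate(length_m):
    """Linearized inverted-pendulum pole p = sqrt(g/L): small tilt
    errors grow like exp(p*t), so shorter sticks diverge faster."""
    return math.sqrt(G_ACCEL / length_m)

for L in (1.0, 0.25, 0.1):  # long wand, short wand, pen
    p = instability_rate(L)
    print(f"L={L:.2f} m: p={p:.1f}/s, errors double every "
          f"{1000 * math.log(2) / p:.0f} ms")
```

A pen’s errors double several times faster than a meter stick’s, which is why balancing it upward is nearly impossible for human reflexes.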
  • Fragility depends on function (balanced movement in the case of the wand) and specific perturbation. 
  • There are hard tradeoffs between optimal lengths, but looking away is simply bad design.
  • Without an actuator, variability (let alone extreme variability) brings an imminent crash.
  • Markets are robust to prices, fragile to all else. 
    • For robustness, we want them to be fast and flexible, but these features cause the fragilities.
    • Much of nature is built on layered architecture between fast “apps” and robust hardware.
    • There are often horizontal transfers from one architecture to another, but only occasional novelty (think about the passing of genes vs the creation of new genes entirely; or similarly the passing of ideas from one discipline to another vs the discovery of novel ideas entirely). This accelerates evolution.
    • Such a system is fragile to exploitation. The more monoculture, the more this is amplified.
    • Our greatest fragility as a society are bad memes. People believe false, dangerous, unhealthy things.
    • These features are shared architectures between genes, bacteria, memes and hardware.
  • Hold your hand in front of your face. Move your hand back and forth real fast until the image blurs. Then hold your hand still, and move your head back and forth real fast until your hand blurs. (do this before reading on)
    • Notice that when you turn your head real fast it’s very challenging to get the hand to blur. This is because we have what is called the vestibular ocular reflex.
    • The illusion of speed and flexibility has been tuned to a specific environment. The head is automatically stabilized to see the hand clearly while moving. This is all happening subconsciously in the cerebellum.
  • There was another demonstration using colored circles that were adjacent at the midpoint of a screen. The slide was quickly switched and the color lingered for a while in your vision. (I was so intrigued by this, I did some googling afterwards and found the term afterimages. While I could not find the exact demonstration, this one using the American flag is quite cool and gives a sense of the effect covered in the following few lines.) Color is the slowest visual transition. We don’t truly see in color; we simulate it.
    • This is a slow, inflexible, but cheap system (it doesn’t use a lot of resources)
    • It’s tuned to a highly specific environment, so we don’t notice it (it feels totally natural to us)
    • It is fragile to some environments, like the afterimage, but hopefully we don’t encounter that fragility in a context where it can hurt us.
  • Learning generally speaking is slow, so we have to evolve reflexes to go fast.

 

Cris Moore at SFI

"Optimization From Mt. Fuji to the Rockies: How computer scientists and physicists think about optimization, landscapes, phase transitions and fragile models"

  • We need to make qualitative distinctions between problems
    • There is a Hamiltonian Path Problem—can you visit every node on a graph just once
    • You can do this with a “search tree” until you end up stuck; go back to the prior node, then begin again. This is called “exhaustive search.”
    • There is reason to believe that exhaustive search is the only way to solve such a problem
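A minimal sketch of that backtracking search, plus the fast check that makes the problem NP (the 4-node graph and its labels are hypothetical):

```python
def hamiltonian_path(graph, path):
    """Backtracking exhaustive search: extend the path one node at a
    time; at a dead end, return to the prior node and try the next."""
    if len(path) == len(graph):
        return path
    for nxt in graph[path[-1]]:
        if nxt not in path:
            found = hamiltonian_path(graph, path + [nxt])
            if found:
                return found
    return None  # stuck: backtrack

def verify(graph, path):
    """Checking a candidate path is fast (polynomial) even when finding
    one may not be -- the essence of NP."""
    return (path is not None and len(path) == len(graph)
            and len(set(path)) == len(path)
            and all(b in graph[a] for a, b in zip(path, path[1:])))

# Hypothetical 4-node graph, adjacency lists
G = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
sol = None
for start in G:
    sol = hamiltonian_path(G, [start])
    if sol:
        break
print(sol, verify(G, sol))  # [0, 1, 2, 3] True
```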
  • NP (complete)
    • P: polynomial-can find solutions efficiently
    • NP: we can check a solution efficiently
    • There is a gap between what you can check vs what you can find efficiently. This is the P vs NP problem.
    • Polynomials don’t grow too badly as n grows, but NP-complete search spaces grow like 2^n; by n=90, exhaustive search takes longer than the age of the universe.
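The age-of-the-universe claim is easy to sanity-check. Assuming a rate of one billion candidates checked per second (my assumption, not a figure from the talk):

```python
AGE_OF_UNIVERSE_S = 4.35e17  # ~13.8 billion years, in seconds
RATE = 1e9                   # assumed: 1 billion candidates checked per second

seconds = 2 ** 90 / RATE
print(f"{seconds:.1e} s, about {seconds / AGE_OF_UNIVERSE_S:.0f}x "
      f"the age of the universe")
```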
  • When is there a shortcut? 
    • 1: divide and conquer—when there are independent sub-problems. 
    • 2: dynamic programming—sub-problems that are not completely independent, but become independent after we make some choices (ie once you choose one node, the next step becomes independent of the prior choices).
    • 3: When greed is good—minimum spanning tree. Take the shortest edge (ie if you want to build power lines connecting cities, build the shortest connections first until a tree is built).
      • Landscape (imagine a mountain range where your goal is to reach a distant highest peak): a single optimum that we can find by climbing (no wrong way).
      • Traveling salesman—big shortcuts can lead us down a primrose path. 

      • There are many local optima where we can get stuck, and it’s impossible to figure out the global optimum
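The “when greed is good” case above can be sketched as Kruskal’s minimum-spanning-tree algorithm (the weighted “city” graph below is hypothetical):

```python
def kruskal_mst(n, edges):
    """Greedy MST: repeatedly take the shortest edge that doesn't form
    a cycle. A union-find structure tracks connected components."""
    parent = list(range(n))

    def find(x):  # root of x's component, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, a, b in sorted(edges):      # shortest edges first
        ra, rb = find(a), find(b)
        if ra != rb:                   # no cycle: greedily accept
            parent[ra] = rb
            tree.append((w, a, b))
    return tree

# Hypothetical cities 0..3 with candidate power-line costs (cost, a, b)
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)]
mst = kruskal_mst(4, edges)
print(mst, "total cost:", sum(w for w, _, _ in mst))
```

Because greed never needs to be undone here, the single optimum is reached by pure climbing, exactly the Mt. Fuji landscape.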
  • NP completeness=a worst case notion.
    • We assume instances are designed by an adversary to encode hard problems. This is a good assumption in cryptography, but not in most of nature. We must ask: “What kind of adversary are scientists fighting against…Manichean or Augustinian?”
    • The real world has structure. We can start with a “greedy tour” and make modifications until it becomes clear there is nothing more to gain.
  • Optimization problems are like exploring a high-dimensional jewel (multi-faceted)
    • Simplex—crawls the edges quickly. Takes exponential amounts of time in the worst case, but is efficient in practice.
    • 1: Add noise to the problem and the number of facets goes down
    • 2: “Probably approximately correct”—not looking for the best solution, just a really good one
      • Landscapes are not as bumpy as they could be. Good solutions are close to the optimum, but we might not find THE optimum.  If your data has clusters, any good cluster should be close to the best.
      • There are phase transitions in NP Complete (what are called tipping points)
    • 3: “Sat(isfiability)”—n variables that can be true or false and a formula of constraints with three variables each.  With n variables, 2^n possibilities. We can search, see if it works.  
      • What if constraints are chosen randomly instead of by an adversary? When the density of constraints is too high, we can no longer satisfy them all at once.
  • There is a point of transition from solvability to unsolvability
    • The hardest problems to solve are at the transition.  When a problem is no longer solvable, you have to search all options to figure that out.
      • Easy, hard, frozen—the structure gets more fragile the closer you get to the solution.
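A small sketch of the random-satisfiability experiment described above (brute force over all assignments, so only tiny n; the ~4.27 clause-to-variable threshold for random 3-SAT is a known result, not something derived here):

```python
import random
from itertools import product

def random_3sat(n_vars, n_clauses, rng):
    """Each clause: 3 distinct variables, each negated at random.
    Literal +v means variable v is true; -v means it is false."""
    return [[v if rng.random() < 0.5 else -v
             for v in rng.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

def satisfiable(n_vars, clauses):
    """Brute force over all 2^n truth assignments."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return True
    return False

rng = random.Random(0)
n = 10
for ratio in (2.0, 4.3, 6.0):  # below / near / above the ~4.27 threshold
    sat = sum(satisfiable(n, random_3sat(n, int(ratio * n), rng))
              for _ in range(10))
    # Below the threshold nearly all instances are satisfiable;
    # above it, nearly none; near it, results are mixed.
    print(f"clauses/variables = {ratio}: {sat}/10 satisfiable")
```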
  • Big data and fragile problems:
    • Finding patterns (inference)
    • You actually don’t want the “best” model, as the “best” gives a better fit, but is subject to overfitting and thus does worse with generalizations about the future. 
  • Finding communities in networks (social media) is an optimization problem. You can divide into two groups to minimize the energy, but there can be many seemingly sensible divisions with nothing in common.
    • You don’t want the “best” structure, you want the consensus of many good solutions. This is often better than the “best” single solution.
    • If there is no consensus then there is probably nothing there at all.
  • The Game of Go: unlike chess, humans remain better at Go than algos (this is true of bridge too).
    • There are simply too many possible options in Go for the traditional approach, which explores the entire game tree (as is done in chess).
    • To win, an algo has to assume a player plays randomly beyond the prediction horizon and recompute the probability of winning the game with each move.
      • This incentivizes (and rewards the algo) for making moves that lead to the most possible winning options, rather than a narrow path which does in fact lead to victory.
      • The goal then becomes broadening the tree as much as possible, and giving the algo player optionality.
      • “Want to evolve evolvability” and not just judge a position, but give mobility (optionality). This is a heuristic in order to gain viability.
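The random-playout idea above can be sketched on a deliberately tiny stand-in for Go (a take-away game I invented for illustration, not anything from the talk): score each legal move by the fraction of random continuations that end in a win, which naturally favors moves that keep many winning options open.

```python
import random

# Tiny stand-in game: players alternately take 1 or 2 coins;
# whoever takes the last coin wins.

def playout_win(coins, rng):
    """True if the player to move wins when both sides play
    uniformly at random from here on."""
    turn = 0
    while coins > 0:
        coins -= rng.randint(1, min(2, coins))
        if coins == 0:
            return turn % 2 == 0
        turn += 1

def best_move(coins, rng, n_playouts=3000):
    """Score each legal move by the estimated chance the opponent then
    loses under random continuation; pick the highest-scoring move."""
    scores = {}
    for take in (1, 2):
        if take > coins:
            continue
        if take == coins:
            scores[take] = 1.0  # taking the last coin wins outright
        else:
            wins = sum(not playout_win(coins - take, rng)
                       for _ in range(n_playouts))
            scores[take] = wins / n_playouts
    return max(scores, key=scores.get), scores

rng = random.Random(1)
move, scores = best_move(4, rng)
print(move, scores)  # from 4 coins, taking 1 (leaving a losing 3) scores best
```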

Nassim Taleb at SFI

"Defining and Mapping Fragility"

 

  • Black swans are not about fat tail events. They are about how we do not know the probabilities in the tail.
  • Confusing absence of evidence with evidence of absence is a very severe problem
    • Too much is based on non-evidentiary methods
  • Financial instruments (options) are more fat-tailed than the function suggests
    • P(x) is non-linear
    • Thus the dynamics of exposure are different than the dynamics of the security
    • To that end, the law of large numbers doesn’t apply to options
  • “Anyone who uses the word variance does not trade options”
    • The measure of a fat tail is a distribution’s kurtosis
  • There was a great chart of 50 years of data across markets
    • In the S&P in particular, 80% of the kurtosis can be attributed to one single day (the 1987 crash)
    • This kurtosis would not converge no matter how broad a window of S&P data you study
    • One can only talk about variance if the error coefficient of the variance is under control
    • In silver, 98% of its 50-year variance comes from 1 observation
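To illustrate the single-observation effect (not to reproduce the chart shown), here is a sketch with a synthetic Pareto sample; its theoretical fourth moment is infinite, so a single draw tends to dominate the empirical kurtosis:

```python
import random

rng = random.Random(42)
# Synthetic fat-tailed sample: Pareto with tail exponent alpha = 1.5.
# The fourth moment is theoretically infinite, so in any finite sample
# a handful of observations must dominate the empirical kurtosis.
xs = [rng.paretovariate(1.5) for _ in range(50_000)]

fourth_powers = [x ** 4 for x in xs]
share = max(fourth_powers) / sum(fourth_powers)
print(f"largest single observation carries {share:.0%} of the 4th moment")
```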
  • EVT-extreme value theory is very problematic because we don’t know what the tail alpha is.
    • In VAR, a small change can add many zeros
    • There is no confidence at all in the tails of these models
    • The concentration of tail events without predecessors means that such events do not occur in the data. Tails that don’t occur are problematic.
  • A short option position pays until a random shock. Asymmetric downside to defined, modest upside. This bet does not like variability (dispersion), volatility.
  • Look at the level of k (presumably kurtosis) and see sensitivity to the scale of the distribution. This is fragility.
    • Volatility = the scale of the distribution
    • The payoff in the tail increases as a result of sigma
  • If you define fragility, you can measure it even without understanding the probabilities in the tail
    • Nonlinearity of the payoff in the tail means that the rate of harm increases disproportionately with each instance of harm
    • What is nonlinear has a negative response to volatility
  • Fragility hates 2nd order effects. For example: if you like 72 degree room temperature, 2 days at 70 degrees is better than 1 at 0 and the next at 140.
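The room-temperature example is Jensen’s inequality in action. With a hypothetical quadratic harm curve (my choice of curve; the talk only gave the temperatures):

```python
def harm(temp_f, ideal=72):
    """Hypothetical convex harm curve: discomfort grows as the square
    of the deviation from the ideal temperature."""
    return (temp_f - ideal) ** 2

steady = harm(70) + harm(70)   # two mild days
swings = harm(0) + harm(140)   # roughly the same average, huge dispersion
print(steady, swings)          # 8 9808: dispersion is devastating when harm is convex
```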
  • Lots of nature demonstrates “S” curves
    • In the convex face of the s-curve, we want dispersion. In the concave face we do not (stability)
  • How to measure risk in portfolios: takes issue with the IMF’s emphasis on stress tests that look at a “worst” past instance, which is a single stationary point in time.
    • Dexia went out of business shortly after “passing” such a stress test
    • Solution: do 3 stress tests and figure out the acceleration of harm past a certain point, as conditions get worse. 
      • We should care about the acceleration of risk, not just its degree
      • Risk increases asymmetrically, so if the rate of acceleration is extreme, this is stress.
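A sketch of the three-stress-test idea: compare losses at three equally spaced stress levels and look at the second difference (the loss numbers below are invented for illustration):

```python
def harm_acceleration(losses):
    """Second difference of losses at three equally spaced stress
    levels; positive means losses are accelerating (convex harm,
    i.e., fragile by this test)."""
    l1, l2, l3 = losses
    return (l3 - l2) - (l2 - l1)

# Hypothetical portfolio losses at -10%, -20%, -30% market shocks
print(harm_acceleration((10, 40, 160)))  # 90: accelerating -> fragile
print(harm_acceleration((10, 20, 30)))   # 0: linear -> not fragile by this test
```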
  • Praised Marty Liebowitz for “figuring out convexity in bonds”
  • “Convex losses, concave gains ---> thin tails ---> robust”
  • Antifragile=convex, benefits from variability
  • Can take the past to see the degree of fragility. You get more information and more measurable data from something that went down and then back up in the past, than something that went down and stayed at 0. 
  • Adding observations (N) is concave; adding dimensions (D) is convex—spurious correlations increase.
    • There is a large D, small N problem in epidemiology.
    • The NSA is one of the few organizations that uses data well, because they are not interested in many things—only the few that have value for what they’re trying to do.
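The large-D, small-N problem is easy to demonstrate: with few observations and many pure-noise variables, some variable will correlate strongly with anything (all data below is synthetic noise):

```python
import math
import random

def corr(xs, ys):
    """Pearson correlation, pure stdlib."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

rng = random.Random(7)
N, D = 20, 500  # few observations, many candidate variables
target = [rng.gauss(0, 1) for _ in range(N)]
best = max(abs(corr([rng.gauss(0, 1) for _ in range(N)], target))
           for _ in range(D))
# With N=20 and D=500, a strong-looking but meaningless correlation
# is essentially guaranteed.
print(f"best |correlation| among {D} pure-noise variables: {best:.2f}")
```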
  • In PCA, the variations found are regime dependent.
  • We can lower nonlinearity of a price (buying options) 
    • Hard to turn fragile into antifragile, but you can make it robust (e.g., you can put lead in a teacup).
    • Robustness requires the absence of an absorbing barrier – no 0s or 1s in the transition probabilities. Don’t get stuck or die in a specific state.
  • “Small is beautiful”
  • Q that “VAR is the best we have”:
    • A pilot announces on a flight to Moscow: “We don’t have a map of Moscow, but we do have one of Paris.” You get off that plane. We don’t accept random maps for that reason, and the same logic applies to VAR.
    • Using VAR under this logic is troubling because it encourages people to take more risk than they really think they are taking. They anchor to the probabilities of VAR, not reality.
  • Liquidation costs are concave. There are diseconomies of scale from massive size.