IPOs; Clean meat

Startups grow and flourish; like flowers, they need nourishment and support. Some grow and make the world beautiful, but many die while still showing a glimpse of new ideas. The entrepreneurship culture has the power to change the world, and the startups that have already been recognized offer their shares to the public with their IPOs, so that you can take part too.
 
And as a vegan trying to raise awareness about our environment, human rights and animal rights, I am so glad to see the success of “Beyond Meat”, a plant-based meat producing company. See their success (https://www.washingtonpost.com/business/2019/05/02/beyond-meat-plant-based-food-company-readies-ipo/). If you are not aware of the clean meat research and movement, watch this video (How clean meat will change the food industry – https://www.youtube.com/watch?v=Mi5EfhPGHg4). Unfortunately, parts of the food industry try to demonize clean/cultured meat by calling it “fake”, “artificial”, “less masculine” etc., as if we live in an all-natural world. (Don’t get that heart transplant, because it’s not natural, right!! Don’t use spectacles, because God has given you the eyes you should be using. Don’t have an abortion even if it’s an unwanted pregnancy at the earliest stage, because embryos are as formed as real human babies. So much incoherent bullshit! Tired of the nonsense, really!!!)
 
Some IPOs (Initial Public Offerings) to keep under your watch, if you are into the stock market. For US residents, Robinhood (https://www.robinhood.com), the commission-free trading app, is great. In fact Robinhood itself, after acquiring 5 million users, is launching its IPO, which is not listed in this link.
 

We will have to save our world

 

This kid is telling us, adults with brains, about climate change, because we apparently forgot how to think and shut our eyes so we wouldn’t see. And then, when I listen to those adults denying climate change, my jaw drops; especially the big fat ones in the parliaments of various countries, up to and including the self-proclaimed world police withdrawing from the United Nations Framework Convention on Climate Change.

Most parents are busy raising their kids; most adults basically hate their jobs and in the coming days probably won’t even find jobs; some adults are inventing every form of entertainment so that they can escape reality. It seems like everybody has deluded each other into believing that we will somehow keep living in the same world we were born into. We are living through the fourth and biggest technological revolution: the internet, social media, the money revolution, artificial intelligence and Mars exploration on one hand; on the other, the decimation and destruction of our own planet and atmosphere by our own greed. Parents have no idea that the schools and colleges they send their kids to probably won’t help them find jobs, given the way Google and Amazon are developing artificial intelligence and automation is taking work away from human hands, while our schools, with no clue about the future, teach the same books they taught our grandparents. And the lands on which we build our homes, probably the most habitable lands, will no longer be available in many areas because of natural hazards caused by climate change.

Countries are still trying to steal oil from each other; fossil fuel industries are still lobbying against green renewable alternatives so that they can make some extra dollars; rainforests are being destroyed so that the overpopulated earth can eat more and more. But when the cyclones and tsunamis rise from the ocean; when the earthquakes decimate acres of land, displace millions of people and force them to migrate and cause wars, while our vague nationalism and self-perpetuating capitalistic greed make us more selfish, more self-righteous and less compassionate day by day; when the melting ice and glaciers and the consequent sea-level rise level everything within a few decades; I don’t know where all this entertainment politics, the mudslinging, the worrying about the smallest little things and the fighting over the silliest topics on outdated television channels will go.

To be pessimistic: maybe nature will just blow up the earth with all our interesting circus inside, just like a cruel boy uses his lazy foot to destroy an ant colony that those small, hard-working ants built little by little, after sweating for days and nights. To be optimistic: maybe we will have child genius activists like this school kid, who will motivate all of us to do the right thing, to take the actions necessary, to start loving our planet, to enjoy living together peacefully with all the other species. I don’t want to live with horrors; I want to live in a world full of optimism and energy, with the sense that I am working towards making the world better than the one I was born into. I was not aware before, because I didn’t have the information; I didn’t know how our world is being shaped into a nightmare by climate change.
This little kid talks about how she doesn’t want to become a climate scientist who discovers problems for a future that will not exist, because of what we are doing right now. We can change, and we will have to change. She shows how we can take action to save our mother earth for our own sakes. And it starts by understanding what we have done wrong and how we can do right. Maybe we should reduce burning fossil fuels. Maybe we should not elect politicians who are ignorant about climate change and artificial intelligence. Maybe we should start using electric cars to reduce the emission of carbon dioxide, and renewable energy sources, so that the new renewable energy companies can flourish. Maybe we should be vocal against industrial farming, so that the greenlands, forests and rainforests don’t get destroyed, and take care of our ecological niches. Maybe we should go vegetarian or vegan, so that we consume fewer animal products, which would reduce industrial farming, a significant cause of greenhouse gas emissions. Maybe we should rethink what other kinds of jobs can employ the people who now depend on jobs that are destructive towards our planet. We should live in the world of today and enjoy it in a way that lets the world flourish. Maybe we should learn to love our world. Please educate yourself about climate change.

The boy who harnessed the wind

This Malawi movie, “The Boy Who Harnessed the Wind”, blew my mind. As a scientist of the future, it’s always hard to explain in simple terms what a scientific mind does. In school we learn all that physics, chemistry, math and calculus, draw all those complex-looking pictures, and then we forget what all these fucking science jargons are and what they are for. A lot of us even start hating science, because we don’t find it relevant, or our lack of understanding frightens us. And for a lot of people, it goes against their own agenda! But it’s all about our survival on earth and our progressive understanding of how nature works; we sometimes forget that. Science provides the best evidence-based tools, and they have transformed our society. This little boy saved the lives of hundreds in his small village, where people were dying, starving, stealing and killing each other for food, by creating a simple windmill. Nature didn’t pour rain upon them for years, and they were not able to grow crops. He was expelled from school because his dad couldn’t pay the fees. But he kept running through the wind; he knew the power of the wind. He discovered that his science teacher’s bicycle had a light powered by the rotation of the bicycle wheel through a dynamo. He then used a fan to power a radio, and convinced his dad to scrap their only bicycle to build the windmill that saved the villagers’ lives. Social entrepreneurship, but against everybody’s will, against everybody’s judgment. He experimented by hand; he sneaked into the school library and learnt the science. At the core, he wanted to solve a problem: the problem of hunger. And it saddens me when thousands of people mindlessly cut trees and act out of their own selfish greed, leading this and the next generation to disaster. We see it all with climate deniers, flat-earth believers, with the science of genetics and so much else. Most people resist something new that they haven’t seen before; they are bad at anticipating. It takes a few scientific minds, but I wish people were more open to simply exploring the new.

The Ancestor’s Tale

Some brief questions that I had to answer from my reading of Professor Richard Dawkins’s book The Ancestor’s Tale. Some answers may be incomplete.
(1)  Briefly, what is evolution?

Ans:   The process by which different kinds of living organisms have developed and diversified from their earlier forms during the history of the earth. Evolution is generally a gradual development from something simple towards something complex. Evolutionary history can be represented as a branching from one species to other species.

 

(2)  There is geologic time, which relies on the rock record; and “absolute” (or numerical or “linear”) time, which relies on calibrated clocks.

Which one was emphasized most in “The Ancestor’s Tale” when discussing the events in the history of life, and Why?

Ans:   “Absolute dating” (putting absolute ages on the fossils using radioactive dating, dendrochronology, paleomagnetism and the molecular clock) is emphasized. Geologic dating requires vertically piecing together rocks from different parts of the world and then determining ages like solving a puzzle. The different rendezvous ages are determined by absolute ages.

(3)  The fossil record WAS used for the dating of major divergences of types of life, and the “beginning to extinction” range of a family.

What are some general problems with relying entirely on this fossil record?  [Hint: Explain why “lowest known occurrence” of a new family type is usually an underestimate; and similarly “highest known occurrence”.]

What major types of abundant multi-cellular life would not leave a useful fossil record?

Ans: There are lots of gaps, or incomplete records, in the collective fossil record. Moreover, not every type of organism gets fossilized, so relying only on the fossil record can lead to false conclusions. For example, if we only found large, big-boned dinosaur fossils and no small, light-boned vertebrates, could we conclude that only large animals lived there? We could not, because the light-boned vertebrates simply did not turn into fossils.

Most of the time only bones, shells and teeth get fossilized. Major types of abundant multi-cellular life, such as the annelid worms, flatworms and nematodes, don’t leave body fossils, as they are soft-bodied with no hard parts. We only learn of their existence from fossilized burrows, trackways or impressions left on the soft sediment.

(4)  “Molecular clock” is NOW used for the dating of major divergences of types of life (including some of the Ancestor’s Tale estimates, especially for soft-bodied life that doesn’t leave shells in the sediment record).

What is this concept of a “molecular clock”,  and What are some important major requirements and assumptions for the “clock” part?

What major types of abundant multi-cellular life would not leave a useful molecular-clock record?

Ans:   This technique uses the count of discrepancies in molecular sequences between surviving species.

The key assumption is that molecular changes accumulate at a roughly constant average rate, so close cousins with a recent common ancestor have fewer discrepancies. After the counting, scientists calibrate the arbitrary timescale of the molecular clock and translate it into real years; they use known fossil ages for a few key branch points in the calibration process.

(5)  Molluscs have twice as many species as vertebrates; but even these are minor compared to another type of animal in diversity.

What type of life has at least 3/4ths of all animal species, and probably dominated the land since the Silurian?

What does that fact, plus related ones imply about the famous “family diversity through time” plots that are common in every textbook?

Ans:  Arthropoda (insects, crustaceans, spiders, scorpions, centipedes, millipedes etc.) has at least 3/4ths of all animal species. With their exoskeletons, jointed legs and segmented bodies, they have dominated the land since the Silurian (ca. 430 Ma).

The diversity of families and species started growing in the Cambrian, then dropped a bit, then grew steadily through the Ordovician until the end-Ordovician mass extinction, then grew again through the Silurian and Devonian until the end-Devonian extinction. Most of the Paleozoic fauna belonged to these periods, and most of them were arthropods. After the end-Devonian mass extinction there was no sudden drop; diversity stayed rather stable until the P/T boundary, and then the modern fauna started to grow.

(6)  Ediacaran (ca. 590 Ma) = “the biggest split of animals” into “Protostomia” (“mouth first”) and “Deuterostomia” (“mouth second”), based on the divergence in the way the embryo develops.  All animals fall into one or the other; and we are on the “Deuterostome” branch.  It is easy from the Ancestor’s Tale book to know what is in the Deuterostome group.

But what are some main organisms in the Protostomia group?

Ans: The main organisms in the Protostomia group are the arthropods, molluscs, flatworms, roundworms (nematodes), annelid worms, Brachiopoda, Bryozoa etc.

 

(7)  earliest Silurian (ca. 440 Ma) = “a big one”: the split of “teleosts” from what would become amphibian-reptile-mammal (after a minor split-away of coelacanths).  Other than the fact that these are “ray-finned fish” (all fish that are not sharks or rays); what is the major life-style divergence?

What does this imply about what may have been happening in the sea/land and the atmosphere environments?  [Note: think about what had to change to allow the split between the fish and the lungfish-tetrapod groups.]

Ans: Sharks mainly live in seawater, and the Ordovician period had cold seas. During the Silurian we got shallow warm water, high mountain lakes, acid streams, marshes, and saline lakes and rivers, and the teleosts (ray-finned fish) managed to live in all these diverse habitats, at many depths, in both salt and fresh water. Sharks always had to maintain their level in the water; they didn’t have bone, only cartilage, whereas ray-finned fish have bones. The descendant of rendezvous 19 was the lungfish: humans, the other tetrapods, coelacanths and lungfish all evolved from lobe-finned fish.

 

(8)  late Carboniferous (ca. 310 Ma) = big split of amniotes (tetrapods producing membrane-eggs, even if some develop inside the body) into “sauropsids” (I hate that obscure name for the group) and “mammal-like reptiles” (a name that Richard Dawkins hates, so he suggests “reptile-like mammals”).

What are the apparent main distinctions (features that imply the separation) between the earliest “reptile-like mammals”, such as “Dimetrodon” from the “sauropsids”?  [Note: consider what is grouped into the large “sauropsid” cluster.]

Ans: Pelycosaurs like Dimetrodon were less mammal-like than the other branch, in which the therapsids evolved into the cynodonts and then into modern mammals. Dimetrodons were more lizard-like: they sprawled on their bellies, with their legs spread out to both sides. These reptile-like mammals gradually raised their bellies progressively higher off the ground, with legs becoming more vertical, and gradually evolved into modern mammals. Distance from the ground is probably the biggest difference.

 

(9)  Cretaceous = Mammal divergence (ca. 100 Ma).  An important part of “mega-evolution” is isolation, then re-mixing.  We looked at “Worlds apart” evolution on isolated New Caledonia and New Zealand after they split from Gondwana.  However, the Pangea supercontinent also split into South America, Laurasia (joined North America and Europe), Africa and Australia during the Cretaceous, as the South Atlantic and the Antarctic oceans opened.  [Later, due to the closure of the Panama strait, the South American and Laurasian faunas mixed, but not the Australian; and when Arabia closed with Eurasia, there were migrations into and out of Africa.]  These 4 large landmasses underwent different evolutionary paths, grouped as “Laurasia-theres”, “Afro-theres”, “Xenarthrans”, and “Marsupial/Monotreme” mammals.  Essentially, mammal evolution went down four paths during the last days of the dinosaurs, and the survivors of the end-Cretaceous catastrophe evolved to occupy the new niches independently in each region.

Briefly describe the range of types in each of the “Laurasia-theres”, “Afro-theres”, “Xenarthrans” and “Marsupials” that together comprise these 4 main evolutionary groups of mammals.

Ans:  Laurasia-theres include a) shrews, b) bats, c) camels, pigs, deer, sheep, hippos and whales, d) horses, tapirs and rhinos, e) cats, dogs, bears, weasels, hyenas, seals and walruses.

Afrotheres include elephants, manatees, dugongs, aardvarks, golden moles and elephant shrews.

Xenarthrans include sloths and armadillos.

Marsupials include the American and true opossums, shrew opossums, the marsupial mole, the Tasmanian devil, the numbat, the monito del monte, wombats, kangaroos, possums and koalas.

 

(10)  Follow-up to the previous:  As a speculation, before Africa contacted Eurasia to mix with the “Laurasia-theres” — What major differences would there have been with the African ecology?  [for example, What would have been the “top predator” and the most abundant herbivore?]  Why?

Ans:  Before the mixing, Africa didn’t have the big carnivores; it had mainly herbivores. Carnivores came into Africa from the great northern continent of Laurasia. The trees were higher, and herbivores like elephants and giraffes had to reach high from the ground, and therefore developed large, tall bodies. So Africa was greener and more habitable for herbivores to diversify, whereas in Eurasia the carnivores depended on their prey.

 

(11)  Another follow-up to the previous:  What does this “World’s Apart” concept imply about using diversity indices for larger land organisms?

Would you expect similar aspects for shallow-marine life, which dominates the fossil record?  Why?

Ans:   “Worlds apart” means isolation. In the cases of Australia and of giant islands like Madagascar and the Galapagos, new classes of diversified animals came into existence in isolation from a few small founding populations. Even in shallow-marine life, from a fish’s or other marine individual’s perspective, there are small islands of habitat where organisms grow and evolve. Possibly the lack of external predators helps the diversity.

 

(12)   Paleogene (ca. 50 Ma) – Monkey divergence.  There are “Old World Monkeys” (Africa-Eurasia), “New World Monkeys” (originally only South America) and “Lemurs” (mainly Madagascar).  When these three monkey-types diverged, there were no land connections between these regions, and monkeys don’t swim.

Which place seems to be the evolutionary “home” of the original monkeys; and

How did those ancestors of monkey types then inhabit (and evolve to fill niches) in the other 2 regions separated by ocean seaways?

Ans:   Africa seems to be the evolutionary home of the original monkeys.

West Africa and South America were close, and there were possibly chains of islands in between. A small founding population of monkeys probably rafted across, perhaps on fragments of mangrove swamp which kept them alive as they floated from Africa to South America. They were lucky to have the sea current in the right direction.

 

(13)  Briefly explain why a “massively improbable event” is often nearly inevitable in geologic time.

Ans:   Extremely improbable events are common in nature. Given enough opportunities, the probability of even a very unlikely event can mount up to near certainty. For example, the probability of one particular mutation during evolution may be very small, but billions of mutations happen continuously, and natural selection acts on them, so such a mutation is not very unlikely overall. David Hand proposed his improbability principle, which combines the law of truly large numbers, the law of inevitability and the law of near enough. The law of near enough says that we tend to identify similar, but not identical, events as identical. It is also a perception issue: we pay attention to the improbable things that do happen and completely ignore the even larger number of improbable things that do not happen. So, in nature, one-in-a-million events happen all the time. On geologic timescales, where we mostly care about spans of millions or billions of years, a massively improbable event is thus easy to find.
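A quick R illustration of the law of truly large numbers (the one-in-a-million rate and the ten million trials are just assumed numbers):

# Probability that a one-in-a-million event happens at least once
# in ten million independent trials
p <- 1e-6
n <- 1e7
1 - (1 - p)^n   # about 0.99995: nearly certain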

 

(14)  Neogene (ca. 15 myr) – Ape families “out of ?where?”.  Dawkins explores the divergence between early hominids, chimpanzees, gorillas, orangutans, and gibbons; and the problems in explaining why some of them have fossils only in Eurasia, whereas others have fossils only in Africa (and neither has a complete fossil record in either location).  He then contrasts “out of Africa” with “out of Asia” and other debate ideas.

Briefly summarize his preferred model (with which direction) for how small “founding populations” of apes and their relatives seem to have migrated between Africa and Asia, and then evolved and diverged.

Ans:   Dawkins supports the model of “hopping to Asia from Africa and then back again to Africa from Asia”. It matches the facts of drifting continents and fluctuating sea levels. Disotell created the family tree, and if the tree is correct then, according to this model:

  1. A population of apes migrated from Africa to Asia around 20 Ma and became all the Asian apes, including the living gibbons and orangutans.
  2. A population of apes migrated back from Asia to Africa and became today’s African apes (gorillas, chimpanzees, humans), including us.

 

(15)  Quaternary (ca. 2 myr) – Hominids.  What is the main “defining difference” between “Australopithecus” and “Homo” genus of hominids?

For the “Homo” species, what is the evidence that there were at least two “out of Africa” events?

Ans:   The main difference is the size of the brain: compared to Australopithecus, in Homo the brain started to expand well beyond the expected size. More precisely, it is the ratio of brain size to body size, rather than the absolute brain size, that matters.

 

(16)  Late Pleistocene (ca. 40 kyr) — Humans and cousins.  Could Neanderthals talk to each other?

What was the “Great Leap Forward” for Homo sapiens sapiens?

Ans:   Neanderthals are our closest cousins, and they might well have spoken with each other. They had social communities with a complex lifestyle; they had clothing, jewelry and even bigger brains than ours.

Jared Diamond coined the term “Great Leap Forward”, which points to the flowering and flourishing of the human mind. All the modern cultural and industrial changes have happened very rapidly; before about 40 kyr ago, change was very slow. The development of intellect was a great leap for mankind. Some point to the origin of language, or of written language, but Dawkins and Steven Pinker argue that the great leap may not have been language itself, as the history of language seems older. It may be how we started using our brains critically, or the ability to store information, extrapolate and predict the future.

 

(17) In retrospect, what is required for surges in evolution of new types of life?

How can we use databases of first/last fossil ranges to try to identify such surges, and their potential causes?

Ans:   I guess: an appropriate environment, genetic mutation, and possibly interbreeding.

Methods in Fourier Spectral Analysis

Fourier Transform

It was a significant discovery in mathematics that any reasonably well-behaved function can be expanded as a sum of harmonic functions (sines and cosines); the resulting expression is known as a Fourier series. A harmonic of a repeating signal such as a sinusoidal wave is a wave with a frequency that is a positive integer multiple of the frequency of the original wave, known as the fundamental frequency. The original wave is called the first harmonic, and the following harmonics are known as higher harmonics. A function can also be expanded in terms of polynomials, and the resulting expression is known as a Taylor series. If the underlying forces are harmonic and there possibly exists some periodicity, then the use of a harmonic series is more useful than using polynomials, as it produces simpler equations. It is possible to discover a few dominating terms in such a series expansion, which may help identify known natural forces with the same period.
Let the symbol h(t) represent a continuous function of time. The Fourier transform is then a function of frequency f:

H_T(f) = \int_{-\infty}^{\infty} h(t) e^{2 \pi i f t} dt
e^{2 \pi i f t} = cos(2\pi f t) + i sin(2\pi f t)

The amplitudes and the phases of the sine waves can be found from this result. Conversely, given the Fourier transform H(f), we can recover the data h(t) using the inverse Fourier transform.

h(t) = \int_{-\infty}^{\infty} H_T(f) e^{-2 \pi i f t} df

The spectral power P is defined as the square of the Fourier amplitude:

P_T(f) = |H_T(f)|^2

However, real data do not span infinite time and will most likely be sampled only at a few discrete points in time. Suppose we have received values of h(t) at times t_j; then an estimate of the Fourier transform can be made using a summation, and the inverse transform can likewise be written as a summation.

h_j \equiv h(t_j)
H(f) \equiv \sum_{j=0}^{N-1} h_j e^{2 \pi i f t_j}

Ideally the data are sampled at equally spaced times, since nice statistical properties are available in that regular case. If the interval between equally spaced data points is \Delta t, then the highest frequency that will appear in the Fourier transform is given by the Nyquist-Shannon sampling theorem. The theorem states: “If a function f(t) contains no frequencies higher than f Hz, then it is completely determined by giving its ordinates at a series of points spaced \frac{1}{2f} seconds apart”. Therefore the Nyquist frequency (the highest frequency) is given by the following equation.

f_N = \frac{1}{2\Delta t}

The lowest frequency, f_L, is the one that gives one full cycle in the time interval T. The other frequencies to evaluate are the multiples f_k of the lowest frequency f_L. We can also derive a symmetric pair of equations. Moreover, if h(t) is band-limited (no frequencies below f_L or above f_N), then there is a relationship between the continuous function h(t) and the discrete values H_k.

f_L = \frac{1}{T}
f_k = kf_L
H_k \equiv \sum_{j=0}^{N-1} h_j e^{2 \pi i f_k t_j}
h_j = \frac{1}{N} \sum_{k=0}^{N-1} H_k e^{-2 \pi i f_k t_j}
h(t) = \frac{1}{N} \sum_{k=0}^{N-1} H_k e^{-2 \pi i f_k t} (when band-limited)
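As a concrete illustration, here is a minimal R sketch (my own variable names, not from the book) that builds the frequency grid f_L, f_k, f_N for an evenly sampled record and computes H_k by the explicit sum; it can be checked against R’s built-in fft(), which uses the opposite sign convention in the exponent:

# Evenly sampled record: N points with spacing dt over total time T
N  <- 64
dt <- 1
T  <- N * dt
t  <- (0:(N-1)) * dt
h  <- sin(2*pi * 5/T * t) + 0.5*cos(2*pi * 12/T * t)   # two on-grid frequencies

fL <- 1 / T          # lowest frequency: one full cycle over the record
fN <- 1 / (2*dt)     # Nyquist frequency
fk <- (0:(N-1)) * fL # frequency grid f_k = k * f_L

# H_k by the explicit sum, with the e^{+2 pi i f t} convention used above
H <- sapply(fk, function(f) sum(h * exp(2i*pi * f * t)))

# fft() uses e^{-2 pi i ...}, so our H should equal the conjugate of fft(h)
max(abs(H - Conj(fft(h))))   # ~1e-12: identical up to round-off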

Periodogram

The Fourier transform gives us complex numbers, and the square of the absolute value of these numbers represents the periodogram. This is the earliest form of numerical spectral analysis, and it is used to estimate spectral power. Even though the data points are collected at evenly spaced discrete times, it is possible to evaluate the periodogram at any frequency.
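Continuing the sketch above, the spectral power can be evaluated at any trial frequency, not just on the f_k grid:

# Periodogram power at an arbitrary frequency f (not necessarily a multiple of fL)
power <- function(f, t, h) abs(sum(h * exp(2i*pi * f * t)))^2
power(5.3/T, t, h)   # power between grid frequencies
power(5/T,   t, h)   # power at a true signal frequency: much larger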

Fast Fourier Transform (FFT)

We can calculate the Fourier transform very efficiently using the FFT. It requires data at equally spaced time points, and it is most efficient when the number of points is an exact power of two. Interpolation is often used to produce the evenly spaced data, which may introduce additional bias and systematic error. For real data consisting of N data points y_j, each taken at time t_j, the power spectrum outputs a set of N+1 data points; the first and the last data points are the same, and they represent the power at frequency zero. The second through the N/2 + 1 data points represent the power at evenly spaced frequencies up to the Nyquist frequency. The spectral power for a given frequency is distributed over several frequency bins, so an optimum determination of the power requires combining this information and properly investigating leakage. The FFT calculates the amplitudes for a fixed set of frequencies: N/2 complex amplitudes at N/2 different frequencies. Because these may not be the true frequencies present in the record, we subtract the mean from the data and then pad it with zeros to overcome this challenge.
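A minimal sketch of that preprocessing in R (the function name is mine): subtract the mean, pad with zeros to the next exact power of two, then transform.

# Demean and zero-pad a record to the next power of two before the FFT
prep_fft <- function(y) {
  y  <- y - mean(y)                      # remove the zero-frequency peak
  n2 <- 2^ceiling(log2(length(y)))       # next exact power of two
  fft(c(y, rep(0, n2 - length(y))))      # zero-pad, then transform
}
y100 <- sin(2*pi * 0.07 * (1:100))       # 100 points, padded to 128
spec <- Mod(prep_fft(y100))^2            # power spectrum of the padded record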

Aliasing

A time series consists of measurements made at a discrete, equally spaced set of times of some phenomenon that is actually evolving continuously, or at least on a much finer time scale. For example, samples of Greenland ice may represent the temperature every 100 years, but if the sampling is not spaced by a precise multiple of a year, we will sometimes measure winter ice and other times summer ice. Even without any long-term variation in the temperature, fluctuations (jumping up and down) will be noticeable in the data. So there can be frequencies in the underlying record higher than the Nyquist frequency associated with the sampling interval. A peak in the true spectrum at a frequency beyond the Nyquist frequency may be strong enough to appear (aliased) in the computed spectrum, giving the impression that a frequency is significant when it is not, or partly obscuring another frequency of interest. This phenomenon is known as aliasing.
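A small R demonstration (frequencies chosen for illustration): a sinusoid at 0.9 cycles per unit, sampled with \Delta t = 1 (Nyquist frequency 0.5), shows up as a false peak at the alias frequency 0.1.

# True frequency 0.9 is above the Nyquist frequency 0.5 for dt = 1
n <- 256; t <- 0:(n-1)
y <- sin(2*pi * 0.9 * t)
P <- Mod(fft(y - mean(y)))^2
f <- (0:(n-1)) / n
f[which.max(P[2:(n/2)]) + 1]   # peak near 0.1, the alias of 0.9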

Tapering

The Fourier transform is defined for a function on a finite interval, and the function needs to be periodic. With a real data set this requirement is not met, as the data end suddenly at t=0 and t=T and can have discontinuities there. This discontinuity introduces distortions (known as the Gibbs phenomenon) into the Fourier transform and generates false high frequencies in the spectrum. Tapering (using a data window) is used to reduce these artifacts. The data y=f(t) are multiplied by a taper function g(t), a simple, slowly varying function, often going towards zero near the edges. Some of the popular tapers are:

1. Sine taper g(t) = sin(\pi t/T)
2. Hanning (offset cosine) taper g(t) = \frac{1}{2}(1-cos(2\pi t/T))
3. Hamming taper g(t) = 0.54 - 0.46cos(2 \pi t/T)
4. Parzen or Bartlett (triangle) window g(t) = 1 - |t - T/2|/(T/2)
5. Welch (parabolic) window g(t) = 1 - (t - T/2)^2/(T/2)^2
6. Daniell (untapered or rectangular) window g(t) = 1

The frequency resolution in the spectrum of the tapered data is degraded. If the primary interest is the resolution of peaks, then the untapered periodogram is superior. However, tapering significantly reduces the sidelobes, and also the bias applied to other nearby peaks by the sidelobes of a strong peak. This is because the taper functions are broad and slowly varying, so their Fourier transforms FT(g) are narrow. The effect of tapering the data is to convolve the Fourier transform of the data with the narrow Fourier transform of the taper function, which amounts to smoothing the spectral values; by the convolution theorem (here * denotes convolution),

FT(fg) = FT(f) * FT(g)

# Sine taper
t <- seq(0, 1, by = 0.01)
T <- 1
g <- sin(pi * t / T)
plot(t, g, t = 'l', col = 1, ylab = 'g(t)')

# Hanning (offset cosine) taper
g2 <- 1/2 * (1 - cos(2*pi*t/T))
lines(t, g2, t = 'l', col = 2)

# Hamming taper
g3 <- 0.54 - 0.46 * cos(2*pi*t/T)
lines(t, g3, t = 'l', col = 3)

# Parzen or Bartlett (triangle) window
g4 <- ifelse(t > T/2, 1 - (t - T/2)/(T/2), 2*t/T)
lines(t, g4, t = 'l', col = 4)

# Welch (parabolic) window
g5 <- 1 - (t - T/2)^2/(T/2)^2
lines(t, g5, t = 'l', col = 5)

# Daniell (rectangular) window, here with 20% cut from each end
g6 <- rep(1, length(t))
g6 <- ifelse(t <= 0.2, 0, g6)
g6 <- ifelse(t >= 0.8, 0, g6)
lines(t, g6, t = 'l', col = 6)

taper_names <- c('Sine', 'Hanning', 'Hamming', 'Bartlett', 'Welch', 'Daniell(20%)')
legend('topleft', legend = taper_names, col = 1:6, lty = 1, cex = 0.75)

Multitaper Analysis

We apply a taper, or data window, to reduce the sidelobes of the spectral lines; basically we want to minimize the leakage of power from the strong peaks to other frequencies. In the multitaper method, several different tapers are applied to the data and the resulting powers are then averaged. Each data taper is multiplied element-wise by the signal to provide a windowed trial, from which one estimates the power at each component frequency. As each taper is pairwise orthogonal to all the other tapers, the windowed signals provide statistically independent estimates of the underlying spectrum. The final spectrum is obtained by averaging over all the tapered spectra. D. Thomson chose the Slepian, or discrete prolate spheroidal, sequences as tapers, since these vectors are mutually orthogonal and possess desirable spectral concentration properties. The multitaper method can suppress sidelobes, but at the cost of resolution. If we use few tapers, the resolution won’t be degraded, but then there won’t be much sidelobe reduction. So there is a trade-off, which is often misunderstood.
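In R, one way to try this is the third-party CRAN package multitaper (an assumption: that it is installed), which implements Thomson’s method with Slepian tapers:

# Thomson multitaper spectrum with Slepian tapers (CRAN package "multitaper")
library(multitaper)
y  <- ts(sin(2*pi * 0.1 * (1:256)) + rnorm(256), deltat = 1)
mt <- spec.mtm(y, nw = 4, k = 7)   # time-bandwidth nw; k = 2*nw - 1 tapers
# mt$freq and mt$spec hold the averaged multitaper spectrum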

Blackman-Tukey Method

 

Blackman and Tukey prescribed techniques to analyze a continuous spectrum that was biased by the presence of sidelobes of strong peaks in the ordinary periodogram. The Blackman-Tukey (BT) method was developed before 1958, prior to the FFT (Fast Fourier Transform) method. A discrete Fourier transform of N points would require the calculation of N^2 sines and cosines; with the slower computers of the pre-FFT days, calculating the Fourier transform was thus expensive. The BT method reduced the time by reducing the size of the dataset by the lag factor used in the autocorrelation calculation. The BT method is based on a fundamental theorem of Fourier analysis: the Fourier transform of a correlation is equal to the product of the Fourier transforms. The correlation of two functions g(t) and h(t) is given by the first equation below.

C(\tau)=g\otimes h= \int_{-\infty}^{\infty} g(t) h(t+\tau) dt
FT(g\otimes h) = FT(g) FT(h)

When g = h, it is called Wiener-Khintchine theorem. Here, P is the spectral power.

FT(g\otimes g) = |FT(g)|^2 = P

The algorithm in BT method calculates partial autocorrelation function, defined by

A_{BT}(\tau) = \int_{0}^{N/l} f(t+\tau)f(t) dt

Here, N is the length of the data set, but we integrate only up to N/l; l is associated with the lag. When l=3 (recommended by Blackman and Tukey) is used, we say that “a lag of 1/3” is used. The Fourier transform of the partial autocorrelation function A_{BT} then gives us the spectral power. Moreover, the symmetry of the partial autocorrelation function (A(-\tau) = A(\tau)) saves half of the computation time.

FT(A_{BT}) = \int_{-\infty}^{\infty} e^{2\pi i f\tau} A_{BT}(\tau) d\tau = P_{BT}(f)
P_{BT}(f) = 2 \int_{0}^{\infty} cos(2\pi f\tau) A_{BT}(\tau) d\tau

If l=1, then it is basically the full autocorrelation function A(\tau) and gives the same answer as the ordinary periodogram.

P(f) = 2 \int_{0}^{\infty} cos(2\pi f\tau) A(\tau) d\tau = FT(A)

Because we are using the partial autocorrelation function instead of the full correlation, the spectral power function gets smoother; we therefore lose resolution in the BT method. However, it averages the sidelobes into the main peak, and thereby gives a better estimate of the true power. The smoothing in the BT method is different from the smoothing when we use a taper: with a taper it is the Fourier transform that is smoothed, whereas with Blackman-Tukey it is the spectral power that gets smoothed. A spectral amplitude that is rapidly varying will be averaged to zero with a taper; in the BT method a rapidly varying amplitude does not necessarily average to zero, since the process of squaring can make the function positive over the region of smoothing. Tapering does not average the sidelobes into the main peak, because a shift in the time scale behaves like phase modulation: the sidelobes, when tapering is applied, will not have the same phase, and if averaged in amplitude they can reduce the strength of the peaks. A major challenge in the BT method is that we have to estimate the proper lag before doing all the calculations; Blackman and Tukey recommended starting with the value 1/3.
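A minimal sketch of the BT recipe in R (variable names are mine; lag fraction 1/3 as recommended): compute the autocovariance only out to lag N/3, mirror it using A(-\tau) = A(\tau), then Fourier transform it.

# Blackman-Tukey sketch: spectral power = FT of the partial autocorrelation
bt_spectrum <- function(y, lag_frac = 1/3) {
  N <- length(y)
  M <- floor(N * lag_frac)          # maximum lag: "a lag of 1/3"
  A <- as.numeric(acf(y, lag.max = M, type = "covariance", plot = FALSE)$acf)
  sym <- c(A, rev(A[-1]))           # mirror, since A(-tau) = A(tau)
  Re(fft(sym))                      # smoothed spectral power estimate
}
y <- sin(2*pi * 0.1 * (1:300)) + rnorm(300)
P <- bt_spectrum(y)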

Lomb-Scargle Periodogram

 

The classic periodogram requires evenly spaced data, but we frequently encounter unevenly spaced data in paleoclimatic research. Lomb and Scargle showed that if the cosine and sine coefficients are normalized separately, then the classic periodogram can be used with unevenly spaced data. If we have a data set (t_k, y_k), we first calculate the mean and variance:

\bar{y} = \frac{1}{N} \sum_{k=1}^{N}y_k
\sigma^2 = \frac{1}{N-1} \sum_{k=1}^{N}[y_k - \bar{y}]^2

For every frequency f, a time offset \tau is defined by

\tau = \frac{1}{4\pi f} tan^{-1}\left(\frac{\sum_{k=1}^{N}sin(4\pi f t_k)}{\sum_{k=1}^{N}cos(4\pi f t_k)}\right)

Then the Lomb-Scargle periodogram of the spectral power P(f) at frequency f is given by

P(f) = \frac{1}{2\sigma^2}\left(\frac{\left[\sum_{k=1}^{N}(y_k - \bar{y}) cos(2\pi f (t_k-\tau))\right]^2}{\sum_{k=1}^{N}cos^2(2\pi f (t_k-\tau))} + \frac{\left[\sum_{k=1}^{N}(y_k - \bar{y}) sin(2\pi f (t_k-\tau))\right]^2}{\sum_{k=1}^{N}sin^2(2\pi f (t_k-\tau))}\right)

With evenly spaced data, two signals of different frequencies can take identical sampled values, which is the aliasing described above. That is why the classic periodogram is usually shown with the frequency range from 0 to 0.5 cycles per sampling interval, as the rest is a mirrored version. With the Lomb-Scargle periodogram on unevenly spaced data, the aliasing effect can be significantly reduced.
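These equations translate directly into R (a sketch; for real work a tested package such as lomb would be preferable):

# Lomb-Scargle power at frequency f for an unevenly sampled record (t, y)
lomb_power <- function(f, t, y) {
  ybar <- mean(y)
  s2   <- var(y)                   # the 1/(N-1) variance defined above
  tau  <- atan2(sum(sin(4*pi*f*t)), sum(cos(4*pi*f*t))) / (4*pi*f)
  ct <- cos(2*pi*f*(t - tau)); st <- sin(2*pi*f*(t - tau))
  (1/(2*s2)) * (sum((y - ybar)*ct)^2 / sum(ct^2) +
                sum((y - ybar)*st)^2 / sum(st^2))
}
set.seed(1)
tu <- sort(runif(200, 0, 100))                     # uneven sampling times
yu <- sin(2*pi * 0.1 * tu) + rnorm(200, sd = 0.3)  # period-10 signal + noise
lomb_power(0.1, tu, yu)   # large power at the true frequency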

Maximum Likelihood Analysis

In the maximum likelihood method, we adjust the parameters of the model until we find the parameters with which our model has the maximum probability/likelihood of generating the data. To estimate the spectral power, we first select a false alarm probability and calculate the normalized periodogram. We identify the maximum peak and test it against the false alarm probability. If the maximum peak passes the false alarm test, we determine the amplitude and phase of the sinusoid representing the peak. Then we subtract that sinusoidal curve from the data, which also removes the annoying sidelobes associated with the peak; after peak removal, the variance of the total record is also reduced. With the new, subtracted data we continue finding the next strongest peaks following the same procedure, and we stop when a peak does not pass the false alarm test. We need to choose the false alarm probability carefully: if it is too low, we can miss some significant peaks; if it is too high, we can mislabel noise as peaks.
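A sketch of the fit-and-subtract step in R, reusing the Lomb-Scargle example above (the false alarm test itself is omitted): the amplitude and phase of the sinusoid at the peak frequency are found by least squares, and the fitted curve is removed.

# Fit a sinusoid at peak frequency f by least squares and subtract it
remove_peak <- function(t, y, f) {
  fit <- lm(y ~ cos(2*pi*f*t) + sin(2*pi*f*t))  # amplitude, phase (and mean)
  y - fitted(fit)                               # residual record
}
res <- remove_peak(tu, yu, 0.1)
lomb_power(0.1, tu, res)   # power at 0.1 drops to near the noise level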

Maximum Entropy Method

It is assumed that the true power spectrum can be approximated by an equation which has a power series. This method finds the spectrum which is closest to white noise (has the maximum randomness, or “entropy”) while still having an autocorrelation function that agrees with the measured values, in the range for which there are measured values. It yields narrower spectral lines. The method is suitable for relatively smooth spectra; with noisy input functions, if a very high order is chosen, spurious peaks may occur. It should be used in conjunction with other, more conservative methods, like periodograms, to choose the correct model order and to avoid false peaks.
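R’s built-in spec.ar() fits an autoregressive model and returns its spectrum, which amounts to a maximum entropy estimate; fixing the order explicitly is one way to guard against spurious peaks:

# Maximum entropy (autoregressive) spectrum with an explicitly chosen order
me <- spec.ar(sin(2*pi * 0.1 * (1:256)) + rnorm(256), order = 10, plot = FALSE)
# me$freq, me$spec: narrow lines; compare with a periodogram before trusting peaks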

Cross Spectrum and Coherency

If a climate proxy a(t) is influenced or dominated by a driving force b(t), we can use the cross spectrum to see whether their amplitudes are related. The cross spectrum is given by the product of their Fourier transforms:

C(f) = A(f) B^*(f)

where A is the Fourier transform of a and B^* is the complex conjugate of the Fourier transform of b. If we want to know whether two signals are in phase with each other, regardless of amplitude, we can take the cross spectrum, square it, and divide by the spectral powers of the individual signals using the following equation for coherency. Coherency measures only the phase relationship and is not sensitive to amplitude, which is a big drawback.

c(f) = \frac{|C(f)|^2}{P_a(f) P_b(f)}

Coherency is valuable if two signals that vary in time stay in phase over a band of frequencies rather than at a single frequency. Therefore a band of adjacent frequencies is used in the averaging process to compute coherency:

coherency(f) = \gamma^2(f) = \frac{|<C(f)>|^2}{<P_a(f)> <P_b(f)>}
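In R, spec.pgram() applied to a two-column series estimates exactly this band-averaged coherency; the smoothing spans below are illustrative choices:

# Band-averaged coherency between two noisy series sharing one driving force
set.seed(2)
n <- 512; drive <- sin(2*pi * 0.05 * (1:n))
a  <- drive + rnorm(n, sd = 0.5)
b  <- drive + rnorm(n, sd = 0.5)
sp <- spec.pgram(ts(cbind(a, b)), spans = c(9, 9), plot = FALSE)
sp$coh[which.min(abs(sp$freq - 0.05))]   # close to 1 at the shared frequency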

Bispectra

In bispectra, the coherency relationship between several frequencies is used. A bispectrum shows a peak whenever (1) three frequencies f_1, f_2 and f_3 are present in the data such that f_1 + f_2 = f_3, and (2) the phase relationship between the three frequencies is coherent for at least a short averaging time for a band near these frequencies. If the nonlinear processes in a driving force (e.g. the eccentricity or inclination of the earth’s orbit) produce coherent frequency triplets, then the response (i.e. climate) is likely to contain the same frequency triplets. For example, if \delta^{18}O is driven by eccentricity, we should be able to find the eccentricity triplets in it. Thus, by comparing the bispectrum of a climate proxy with the bispectrum of the driving forces, we can verify the influence of the driving forces.

Monte Carlo Simulation of Background

Monte Carlo simulation is extremely useful for answering questions such as whether the data are properly tuned, whether the timescale is incorrect, whether some spectral power is being leaked to adjacent frequencies, whether a peak has real structure, and for understanding structures near the base of a peak (a shoulder) in a spectral analysis. Generally the Monte Carlo simulation is run many times. For each simulation, a real signal (a sinusoidal wave) is generated, a random background signal is added, and the spectral power is calculated to look for shoulders. In this way the frequency of shoulder occurrence can be measured and the role of randomness can be assessed. It is important to create a background that behaves similarly to the background in the real data; a dissimilar background will cause false conclusions. We also need to estimate the statistical significance of the peaks very carefully.
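A minimal sketch of such a simulation in R (all parameters, and the crude shoulder criterion, are illustrative assumptions):

# Monte Carlo: how often does a red-noise background create a shoulder
# next to a real spectral peak?
set.seed(3)
n <- 256; t <- 1:n; runs <- 1000; shoulders <- 0
for (i in 1:runs) {
  noise <- as.numeric(arima.sim(list(ar = 0.7), n))   # red-noise background
  y <- sin(2*pi * 0.1 * t) + noise                    # real signal + background
  P <- Mod(fft(y - mean(y)))^2
  k <- which.max(P[2:(n/2)]) + 1                      # bin of the strongest peak
  if (P[k + 3] > 0.3 * P[k]) shoulders <- shoulders + 1  # crude shoulder test
}
shoulders / runs   # fraction of simulations showing a shoulder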

(This article is a quick review of Fourier spectral analysis from the book “Ice Ages and Astronomical Causes: Data, Spectral Analysis and Mechanisms” by Richard A. Muller and Gordon J. MacDonald.)

How do paleoclimatologists investigate the ancient Earth? What are the different climate proxies, and what is their significance?

To know and understand the ancient climate, different climate proxies are generally used. We can measure the concentration of greenhouse gases using air trapped in the Greenland and Antarctic glaciers, which gives us samples of the atmosphere back to about 420 kyr. The extent of past glaciers in North America and on mountains in the tropical Andes can be estimated from scour marks, moraines and erratic boulders. Forams (short for foraminifera) are microscopic organisms whose life cycles depend on local temperature and whose fossils preserve samples of ancient material; some planktic forams represent a “proxy” for sea surface temperature, as they indirectly inform us about the temperature. One of the most remarkable proxies is the ratio of oxygen isotopes in benthic (bottom-dwelling) forams in ancient sediment, which reflects the total amount of ice that existed on the Earth at the time the sea beds were formed. A scientist needs to be careful in the analysis, as most proxies depend on more than one aspect of climate. Below I discuss the primary proxies which have been used to investigate paleoclimate. Many of the samples come from seafloor cores and from Greenland or Antarctic ice cores. The cores are named V22-174, RC13-110, DSDP-806 etc. In the geologic community various prefixes are used, some of which are listed below:

  • V: Vema, a converted yacht operated by the Lamont-Doherty Earth Observatory of Columbia University.
  • RC: Research vessel Robert Conrad.
  • DSDP: Deep Sea Drilling Project operated from 1968 to 1983 by the Scripps Institution of Oceanography at University of California, San Diego.
  • ODP: Ocean Drilling Program, an international collaboration.
  • GRIP: European based GReenland Ice-core Project.
  • GISP2: US-based Greenland Ice Sheet Project #2.
  • Vostok: Russian station on the East Antarctic ice plateau.
  • MD: The research vessel Marion Dufresne, operated by the French.

1. Oxygen Isotopes

The pattern of oxygen isotopes is remarkably similar in sea floor records around the world, and this universality is very attractive for a climate proxy. The ratio of oxygen isotopes found in ice, trapped air and benthic/planktic forams is widely used as a climate proxy. Oxygen consists of three stable isotopes: 99.759% is ^{16}O, 0.037% is ^{17}O, and 0.204% is ^{18}O. The variation in the fraction of ^{18}O can be measured with high accuracy. The fractional change, shown by the following equation, expresses how much the ratio \frac{^{18}O}{^{16}O} in the sample differs, in parts per thousand, from the reference.

\delta^{18}O = \left(\frac{\left(\frac{^{18}O}{^{16}O}\right)_{Sample}}{\left(\frac{^{18}O}{^{16}O}\right)_{Reference}} - 1\right) \times 1000
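For concreteness, the definition as a small R helper (the ratio values below are invented for illustration):

# delta-18O in parts per thousand, from sample and reference 18O/16O ratios
delta18O <- function(r_sample, r_reference) (r_sample / r_reference - 1) * 1000
delta18O(0.0020010, 0.0020052)   # a slightly "light" sample: about -2.1 per mil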

 

Oxygen isotope separation occurs because of isotopic differences in vapor pressure and chemical reaction rates, which depend on temperature. Some of the most important geophysical processes that lead to changes in \delta^{18}O are:

  1. Evaporated water is lighter than the remaining liquid. Water containing ^{16}O has a higher vapor pressure than water containing ^{18}O, so it evaporates more quickly.
  2. Precipitated water molecules are heavier than those in the residual vapor. H_2^{18}O condenses more readily than H_2^{16}O, so as water vapor is carried towards Greenland or central Antarctica, the residual vapor becomes lighter.
  3. Oceanic \delta^{18}O is non-uniformly distributed. This means that changes in the pattern of the winds that carry vapor, and hence in the source, will also change \delta^{18}O. At present, the difference in surface water is 1.5‰ from pole to equator.
  4. Biological activity enriches the heavy isotope. The \delta^{18}O in the calcium carbonate of shells is 40‰ greater, on average, than in the water in which the organism lives.

 

The net result of these effects is that glacial ice is light, with \delta^{18}O typically lower than seawater. So in glacial ice, containing more ^{16}O, \delta^{18}O is negative, whereas in surface water, containing more ^{18}O, \delta^{18}O is positive. However, when large volumes of ice are stored in ice-age glaciers, there can be considerable depletion of the light isotopes in the oceans.
In 1964, Dansgaard and colleagues showed that measurements of isotopic enrichment in ocean water as a function of latitude yield the following approximate relationship between temperature T and \delta^{18}O:

\delta^{18}O \equiv 0.7 T - 13.6
However, other factors can also change \delta^{18}O; if we go back to earlier times when the temperature was lower, \delta^{18}O might not be lower, contradicting the above equation. When several measurements are made at the same latitude, the effect is argued to depend on the amount of precipitation and not on temperature.
Moreover, depending on the source, we have to consider other issues. In planktic fossils, we might expect \delta^{18}O to reflect surface conditions, and therefore to be sensitive to temperature and salinity. In benthic forams, \delta^{18}O must be more sensitive to global ice, since there is little temperature variation on the sea floor. In other samples (e.g. ice, trapped air or calcite), \delta^{18}O may represent the temperature, not the ice volume.
Several attempts have been made to extract the underlying \delta^{18}O signal that is common to the records. The SPECMAP stack (Imbrie et al., 1984) was a combination of five \delta^{18}O records from cores including V30-40, RC11-120, V28-238 and DSDP502b.

2. Deuterium – Temperature Proxy

Hydrogen generally contains only one proton in its nucleus and is light, with atomic weight 1. Deuterium (D or ^2H), on the other hand, is the heavy isotope of hydrogen which contains one proton and one neutron in its nucleus, and thus has atomic weight 2. Bonds formed with deuterium tend to be more stable than those with light hydrogen. Deuterated water is more sensitive to temperature than is the ^{18}O form. We can see this clearly in the “fractionation factor”, which describes the equilibrium between liquid and vapor: it is defined to be the ratio of D/H in a liquid to the ratio of D/H in a vapor that is in equilibrium with that liquid. The fractionation factor for HDO is approximately 1.08, and it varies more rapidly with temperature than that for ^{18}O. Therefore, the condensation of the deuterated form of heavy water (HDO) is significantly more sensitive to temperature variation than is the ^{18}O form (H_2^{18}O), and deuterium is considered a temperature proxy. A temperature scale was devised for the Vostok ice core by assuming the equation:

\Delta T = \frac{\Delta \delta D_{ICE} - \Delta \delta^{18}O_{SW}}{9}

where \delta^{18}O_{SW} refers to the sea floor isotope record.
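As a worked example (the input changes are invented for illustration):

# Vostok temperature scale: Delta-T from changes in ice deuterium and seawater 18O
deltaT <- function(d_dD_ice, d_d18O_sw) (d_dD_ice - d_d18O_sw) / 9
deltaT(-45, -1.2)   # about -4.9 degrees for these illustrative changes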

3. Carbon-13

Carbon on the earth has two stable isotopes: ^{12}C with an abundance of 98.9% and ^{13}C with an abundance of 1.1%. The ratio of these two isotopes is described by the quantity \delta^{13}C, defined by the equation below. The reference value is often taken to be a sample known as the “Peedee belemnite” (PDB); its \delta^{13}C value is very close to that of mean sea water.

\delta^{13}C = \left(\frac{\left(\frac{^{13}C}{^{12}C}\right)_{Sample}}{\left(\frac{^{13}C}{^{12}C}\right)_{Reference}} - 1\right) \times 1000

The lighter isotope, ^{12}C, is preferentially absorbed into the organic tissue of plants, leading to negative values of \delta^{13}C of about -20‰ to -25‰. In regions in which photosynthesis is active, this removes typically 10-20% of the dissolved inorganic carbon in seawater, leading to ^{13}C enrichment in the surrounding water. Because different regions of the world have different activity, there is geographic variation: warm surface water has the highest \delta^{13}C, whereas deep Pacific water has the lowest. Thus \delta^{13}C can be used as a tracer for oceanic currents.
In contrast, only a small separation of carbon isotopes takes place in the formation of calcium carbonate shells; thus the measurement of \delta^{13}C reflects the composition of the ocean water at the time and location in which the shell grew.
^{13}C is an extremely important isotope for paleoclimate studies, because it responds to the presence of life, so \delta^{13}C can record climate change. During glacial periods, biological activity was reduced by advancing glaciers and colder temperatures, and light carbon was released into the atmosphere and eventually mixed into the oceans. \delta^{13}C from benthic (bottom-dwelling) forams is typically 0.35‰ lower during glacials than during interglacials. In contrast, planktic forams don’t show such changes.

4. Vostok

The ice core from the Vostok site in Antarctica (Petit et al., 1999), located at 78°S and 107°E, covers the longest period of time of any ice record; it reached a depth of 3623 metres. An untuned but unbiased timescale was derived based on ice accumulation and glacial flow models. Many proxies of climate interest were measured in the Vostok core, including atmospheric methane, atmospheric oxygen, deuterium in the ice, dust content and sea salt. Atmospheric methane is produced by the biological activity of anaerobic bacteria, and its level in paleoclimate data is presumed to reflect the area of the earth covered by swamps and wetlands. The observed dust signal (a strong 100 kyr cycle) in the Vostok record is believed to reflect the reduction in vegetation during glacial periods and the accompanying increase in wind-blown erosion. The sodium concentration reflects the presence of sea spray aerosols blowing over the Vostok region.

5. Atmospheric \delta^{18}O and Dole Effect

Atmospheric oxygen has a \delta^{18}O of +23.5‰ compared to that of mean ocean sea water, due to the removal of the lighter isotope ^{16}O from the atmosphere by biological activity. The difference is called the “Dole effect”, and it is assumed to be time-independent.

6. \delta^{18}O / CO_2 Mystery

The difference between the oceanic and atmospheric \delta^{18}O is due to biological activity. However, carbon dioxide, even though driven by biological processes, doesn’t show a similar spectrum: the strong peaks forced by the precession parameter in the oxygen signal are absent in the carbon dioxide record, which is mysterious and still under investigation.

7. Other Sea Floor Records

7.1 Terrigenous component

The terrigenous component of sea floor sediment is the fraction which has presumably come from land, in the form of wind-blown dust. The most significant frequencies found in the spectrum of the detuned terrigenous component of Site 721 have periods of 41, 24, 22 and 19 kyr. These periods indicate that the signals were dominated by solar insolation.

7.2 Foram size: the coarse, or “sand”, component

In sea floor cores, the main component of the sand fraction is frequently large forams; the coarse component therefore reflects an interesting change in the ecology of the oceans. A clear eccentricity signal was detected in a core that had already shown a clear absence of eccentricity in its \delta^{18}O component.

7.3 Lysocline: carbonate isopleths

Pressure varies with depth in the ocean, and this influences the solubility of calcium carbonate. At a certain depth the shells of fossil plankton begin to dissolve; this boundary is called the lysocline. It can be quantified by the percentage of calcium carbonate in the sediment as a function of depth. One can plot the depth at which the 60% lysocline is found as a function of age; this depends on the depth of the oceans at that age. The signal appears to be dominated by a 100 kyr cycle, as would be expected if the primary driving force were the depth of the ocean, determined by the amount of ice accumulated on land.

Featured Image Courtesy: https://katiecoleborn.wordpress.com/5-proxy-climate-records-what-are-they-and-how-do-they-work/
Main Reference: Ice Ages and Astronomical Causes (Data, Spectral Analysis and Mechanisms) by Richard A. Muller and Gordon J. MacDonald.


Diversity in the Past and Present

Well, a lot of us are fans of Khaleesi on Game of Thrones (GOT) and her fire-exhaling dragons. It’s then a natural question 🙂 whether there ever existed creatures who could breathe or exhale fire. And science answers with a big “No”, as there’s no evidence; apparently that’s just the Bible thumpers and our own wishful, imaginative thinking! It feels exciting to see those dragons on GOT though!!

I am reading “The Ancestor’s Tale” by Richard Dawkins, and within a few hours I have learnt so much about mesmerizing animals, each with its own features, which survived, diverged, got trapped and so on. The Jurassic Park movies did a great job on the Jurassic period, but there’s so much more story in our ancestry and concestry.

I wish there were GOT-like fictions based on the animals and creatures that actually exist, as most of us are not fans of Animal Planet-like channels.

What is this obsession of ours with the non-existent, when there’s such beauty in the existent? 🤔

The Ancestor’s Tale by R.D.
https://www.goodreads.com/book/show/17977.The_Ancestor_s_Tale

How to Read Scientific papers/Text books?

Reading a scientific textbook is not like reading a novel: novels tell stories, but scientific textbooks try to inform with facts, evidence, assumptions and logic.

Goal: Being informed and educated

1. Don’t always read front to back
2. Read for Big Ideas + Connect Ideas
3. Read for key details
4. Take good-enough notes. Read the book once but your notes multiple times.

Break the chronological order, because it is too easy to get lost in the minutiae:
a) Read the questions at the end.
b) Read the final summary/conclusion.
c) Read the headings and subdivisions of the chapter.
d) Read the intro.
Finally, have the view from front to back.

Chance

It’s 2018. The first book to try to formulate probability/chance was by Abraham de Moivre, exactly 300 years ago, in 1718.

If you roll two dice, can you get two numbers that sum to 14? No, because the maximum number on each die is 6. But can you get a sum of 11? Yes, that’s possible. How probable is it? Can we measure it? These are some basic probability questions that every science student learns in high school. But the underlying philosophy here is striking:
“Not everything is equally probable. Some are more likely than others. Some arguments, ideas are better than others, but based on some established ideas or agreed upon assumptions. How do we measure?”
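You can enumerate the dice question directly in R: of the 36 equally likely outcomes of two dice, exactly two (5+6 and 6+5) sum to 11.

# All 36 outcomes of two dice; count the ones summing to 11
sums <- outer(1:6, 1:6, "+")
sum(sums == 11) / length(sums)   # 2/36, about 0.056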

What are your chances of waking up tomorrow? Is it 100%? What’s the chance that your relationship will last? Are these even measurable? What does it mean when you say “I hope”? Does that automatically translate into a highly probable future? Why don’t you win that million dollar lottery? Can you win some bucks at the Vegas tables playing poker or blackjack? 😉 What’s the chance that you will win the game? It wasn’t easy for mathematicians and statisticians to formulate what “probable” and “expected” really mean in the world of uncertainty we live in and deal with. And in science, who doesn’t deal with some basic probability in their data?
https://www.goodreads.com/book/show/9081462-the-doctrine-of-chances

I absolutely admire the talk by Dr. Ana at Lawrence Livermore National Lab, “Understanding the world through statistics”, which introduced me to this book.

“The best thing about being a statistician is that you get to play in everyone’s backyard.” – the great statistician John Tukey.

Maybe I can make a new phrase for CS programmers too... haha.

“The best thing about being a computer programmer is that you get to make toys for everyone.” LoL.

Basketball statistics, and why Stephen Curry attempts more 3-point shots than 2-point shots:
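The core of the argument is expected points per attempt (the shooting percentages here are invented for illustration, not Curry’s real numbers):

# Expected points per attempt: a decent 3-point shooter beats a good 2-point one
p3 <- 0.42; p2 <- 0.52
c(three = 3 * p3, two = 2 * p2)   # 1.26 vs 1.04 expected points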

Fossil Record as a way to learn earth history

Who studies fossils? A paleontologist. Why do paleontologists study fossils? Because the fossil record brings information, clues and ideas about the climatic changes of the planet and the geographical changes that have occurred on earth. For example, plant fossils and pollen fossils have been used to indicate climatic change. Scientists have used fossils to create the geologic timescale. We can see fossil evolution during the Paleogene period (beginning about 66 million years ago) in the image below; you can see the branches and subbranches. Subbotina trivialis (genus: Subbotina, species: trivialis) is highlighted.

[Figure] Evolution of planktonic fossils during the Paleogene period. The fossil image of Subbotina trivialis is shown in the rectangular yellow window; Subbotina trivialis is found from around 65.5 million years ago.

Determining the age of a fossil is very important and very challenging. Fossils are very often found in rocks, and by comparing one rock formation with another (relative dating) it’s possible to find a relative age for a fossil. Dating rocks involves calculating the rate of decay of radioactive elements such as carbon-14, uranium-238, potassium-40, aluminium-26, samarium-147 and rubidium-87 (which decays to strontium-87). Fossilization is a rare event, as there may be no trace of an organism after its extinction; therefore the record of an organism’s life in a fossil is something very significant to discover. The organism’s physical structure, and subsequently deduced information such as its environment, diet and life cycle, can be obtained by studying fossils. Trace fossils, fossilized marks left as a result of the activities of creatures, such as trails, footprints and burrows, are also recorded and used as sources of information. From the fossil record throughout geologic time, scientists understood that the evolution of life is not a linear process: sometimes it is slow and sometimes it is exponential. We also discovered that there might be periodicity in mass extinctions by studying fossil records. Even the concept of plate tectonics was helped by fossil records. The more I am learning about fossils, the more exciting it becomes.
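The decay calculation behind radiometric dating is a one-liner (the surviving fraction below is an invented example):

# Radiometric age from the surviving fraction of a parent isotope
age <- function(fraction_left, half_life_yr) -log(fraction_left) * half_life_yr / log(2)
age(0.25, 5730)   # carbon-14 after two half-lives: 11460 years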