**Abstract:** Symmetry methods for differential equations have been of interest since Sophus Lie, in the 19th century, discovered ways to use symmetries to help solve differential equations. Since then there have been many new developments in this theory; one of them is the construction of first integrals of differential equations. Having a complete set of first integrals allows us to solve a differential equation. In this presentation I will go into some of the theory of symmetry methods for differential equations, then move on to the main topic, which is finding first integrals of differential equations, and finally I will give some interesting results linking the Lie algebra of the symmetries with the Lie algebra formed by their action on the first integrals.
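As a standard illustration of the idea (my own example, not taken from the abstract): a first integral is a function that is constant along every solution, and knowing one reduces the order of the equation. For the harmonic oscillator,

```latex
\[
  \ddot{x} + x = 0, \qquad I(x,\dot{x}) = \dot{x}^{2} + x^{2},
\]
\[
  \frac{dI}{dt} = 2\dot{x}\ddot{x} + 2x\dot{x} = 2\dot{x}\,(\ddot{x} + x) = 0,
\]
```

so $I$ is constant along solutions, and setting $I = c$ reduces the second-order equation to the first-order equation $\dot{x} = \pm\sqrt{c - x^{2}}$.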

**Abstract:** If I fix one end of a belt to a chair and give it one full 360-degree twist along its length, I can’t undo the twist without letting the other end out of my hands. But if I instead give it two full twists, I can straighten it without letting go! (An interactive demonstration will be provided so the audience can confirm this for themselves.) I will give the mathematical explanation for this phenomenon by discussing the geometry of rotations in three dimensions and the topology of the rotation group SO(3). I’ll discuss the relevance of these ideas to theoretical physics, particularly the physics of elementary particles, and, time permitting, use them to explain part of the structure of the Periodic Table of elements.

Mixture models are undoubtedly one of the most popular approaches for model-based clustering, with mixtures of Gaussians dominating thanks to their tractability and ease of interpretation. Nevertheless, due to the unbounded support of a Gaussian distribution, issues occur when working with data that live within compact spaces, such as proportions (0 – 100%). To cluster this type of data, we propose a Dirichlet Process Beta mixture model that makes no assumptions about the number of clusters. Inference is performed using a standard Variational Bayes (sVB) approach, upgraded to a more robust version, known as Annealed Variational Bayes (AVB), that accounts for poor initialisation of the algorithm. Feature selection is also performed, i.e., the detection of important features that discriminate between clusters. This talk concentrates on the comparison of sVB and AVB performance when applied to synthetic data, as well as on the “per cluster” feature selection. Real applications on microarray DNA methylation data are currently in progress. This is joint work with Leonardo Bottolo, University of Cambridge.


**Abstract:** This week our new student Danielle will talk about her project, including the background to the project, the aims of the research, data collection methods, and the different data sets that will be used.

**Abstract:** Piecewise Deterministic Markov Processes have recently drawn the attention of the Markov Chain Monte Carlo community. The first reason for this is that, in general, one can simulate exactly the entire path of such a process. The second is that these processes are non-reversible, which sometimes leads to faster mixing. One of the processes used to construct Piecewise Deterministic Algorithms is the ZigZag process. This process moves linearly in the space $\mathbb{R}^d$ in specific directions for a random period of time, changing direction one coordinate at a time. An important question related to these samplers is the existence of a Central Limit Theorem, which is closely connected to the property of Geometric Ergodicity. In this talk we will explain why the ZigZag Sampler is not Geometrically Ergodic when the target distribution has heavy tails, and we will suggest how one could try to correct this by allowing the ZigZag process to speed up in certain regions.
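To make the dynamics concrete (this sketch is mine, not part of the talk): below is a minimal one-dimensional ZigZag sampler targeting the standard Gaussian. With $U(x) = x^2/2$, the switching rate is $\lambda(x, v) = \max(0, v x)$, and the event times along a linear segment can be drawn exactly by inverting the integrated rate, which is what makes exact path simulation possible. The function name `zigzag_gaussian` is an assumption for illustration.

```python
import math
import random

def zigzag_gaussian(n_events: int, seed: int = 0) -> float:
    """Simulate a 1-D ZigZag process targeting the standard Gaussian
    and return the time-average estimate of E[X^2] (true value: 1).

    Along the segment x(s) = x + v*s the switching rate is
    max(0, a + s) with a = v*x, so the first event time solves
    int_0^t max(0, a + s) ds = E with E ~ Exp(1), in closed form:
        t = -a + sqrt(max(a, 0)^2 + 2 E).
    """
    rng = random.Random(seed)
    x, v = 0.0, 1.0              # position and velocity (v in {-1, +1})
    total_time = 0.0
    second_moment = 0.0          # accumulates int x(t)^2 dt

    for _ in range(n_events):
        a = v * x
        e = rng.expovariate(1.0)
        tau = -a + math.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
        # exact integral of (x + v s)^2 over [0, tau] for this segment
        second_moment += ((x + v * tau) ** 3 - x ** 3) / (3.0 * v)
        total_time += tau
        x += v * tau
        v = -v                   # in 1-D every event flips the velocity
    return second_moment / total_time
```

Because the estimate averages over the continuous path (not just the event points), it converges to $\mathbb{E}[X^2] = 1$ as the number of events grows; heavy-tailed targets, the subject of the talk, are precisely where this kind of convergence can fail to be geometric.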

**Abstract:** Recall that, given a Lie algebra **g**, we can decompose **g** as a direct sum of **g^+**, **g^0 = h**, and **g^-**. To **g^+** we can associate the enveloping algebra U(**g^+**). The q-deformation of this algebra in the case of G_2, and its primitive ideals, will be the main focus of this talk.

**Abstract:** Multivariate abundance data are abundances collected simultaneously for many ecological taxa (species, orders, functional groups…). This type of data is common in ecology and the environmental sciences, occurring whenever we measure several response variables from each replicate sample. Questions such as how the species composition of a community varies across sites are inherently multivariate. Traditional methods of analysis calculate a measure of similarity between each pair of samples, thus converting a multivariate dataset into a univariate one. However, this approach leads to low statistical power, and it does not account for important properties of multivariate abundance data, such as the mean-variance relationship or the occurrence of rare species (many zeros).

The mvabund package instead takes a model-based approach, developing a novel set of hypothesis-testing tools within the generalised linear model (GLM) framework. We use resampling-based hypothesis testing to make community-level and taxon-specific inferences about which factors or environmental variables are associated with the multivariate abundances. These inference tools take into account correlation between species, which is not possible using standard GLM tools. Some more recent extensions of the mvabund package will also be demonstrated.

**Abstract:** Consider the (abstract) Klein bottle – it has two one-dimensional holes: one ordinary hole and one “weird” one. The commonly known picture of the Klein bottle with a circle of self-intersection also has two one-dimensional holes, so all seems well. However, the map from one to the other actually “kills” the “weird” hole in some sense – so it is a mystery where the new hole has come from. In this talk, we will explore this idea of “holes” not coming from the domain of a map; in fact, they come from the multiple point spaces. We will look at a wonderful piece of machinery called the Image Computing Spectral Sequence – though I will put the emphasis on pictures rather than the full details of what is going on – and show that in fact most cases have a strong link between holes in the image and holes in the multiple point spaces.

**Abstract:** LaTeX (pronounced “lay-tech”) is the norm for current mathematical writing; (almost) all mathematical journals require articles to be written in LaTeX. With its many automatic features, such as layout rendering and referencing, it is also a very practical word processor. Most importantly, it accommodates mathematical equations with great ease, which is a great improvement over the likes of Microsoft Office, and it is therefore of use to statisticians as well.

In this talk, Peter will introduce LaTeX from a practical viewpoint; a tentative outline is:

– Basics (writing text and equations, sections, cross-references, …)

– Document classes and packages (including beamer: “LaTeX PowerPoint”)

– Tricks (defining commands, using spacing, including subdocuments, …)

– Citing literature (BibTeX and finding bibliography entries)

– Other miscellany (writing in different languages, …)
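As a taste of the basics in the first two bullets, here is a minimal LaTeX document (my own example, not taken from the talk) showing a section, a numbered equation, a user-defined command, and automatic cross-referencing:

```latex
\documentclass{article}
\usepackage{amsmath}          % extra maths environments
\usepackage{amssymb}          % \mathbb and friends

\newcommand{\R}{\mathbb{R}}   % a user-defined command ("Tricks")

\begin{document}

\section{An equation}\label{sec:eq}

Euler's identity, for $\theta \in \R$:
\begin{equation}\label{eq:euler}
  e^{i\theta} = \cos\theta + i\sin\theta.
\end{equation}

Equation~\eqref{eq:euler} appears in Section~\ref{sec:eq};
both numbers are filled in automatically.

\end{document}
```

Compiling this (e.g. with `pdflatex`) twice resolves the `\ref` and `\eqref` cross-references, which is exactly the kind of automatic renumbering that makes LaTeX practical.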

This Friday, Gemma Hunt (from the university’s outreach department) and Kerry Newlyn (from SMSAS) will talk with us about what a student ambassador is and how to become one.

Everyone is welcome and questions are encouraged!
