Modern Stochastics: Theory and Applications (VMSTA), Research Article VMSTA193, DOI 10.15559/21-VMSTA193. Published by VTeX, Mokslininkų g. 2A, 08412 Vilnius, Lithuania.

Interacting Brownian motions in infinite dimensions related to the origin of the spectrum of random matrices

Yosuke Kawamoto (kawamoto@college.fdcnet.ac.jp, ORCID 0000-0001-5236-0794), Fukuoka Dental College, Fukuoka 814-0193, Japan.

© 2021 The Author(s). Published by VTeX. Open access article under the CC BY license.

The generalised sine random point field arises from the scaling limit, at the origin, of the eigenvalues of the generalised Gaussian ensembles. We solve an infinite-dimensional stochastic differential equation (ISDE) describing infinitely many interacting Brownian particles whose unlabelled dynamics are reversible with respect to the generalised sine random point field. Moreover, a finite-particle approximation of the ISDE is shown; that is, a solution to the ISDE is approximated by solutions to finite-dimensional SDEs describing finite-particle systems related to the generalised Gaussian ensembles.

Keywords: interacting Brownian motions, random matrices, infinite-dimensional stochastic differential equations, infinite particle systems. MSC: 60K35, 60B20, 60J60, 60H10, 82B21. This work was supported by JSPS KAKENHI Grant Numbers 21K13812 and 16H06338.
Introduction

Consider the unitary ensembles of random matrices whose density is given by
$$\frac{1}{Z}\,|\det M|^{2\alpha}\,e^{-\operatorname{Tr}M^{2}}\,dM \quad\text{for } \alpha>-\frac{1}{2}, \tag{1.1}$$
where $dM$ is the usual flat Lebesgue measure on the space of $N\times N$ Hermitian matrices, and $Z$ is the normalising constant. Hereafter, by abuse of notation, we use the same letter $Z$ for the normalising constants of several ensembles. Let $\mathbf{x}_N=(x_1,\dots,x_N)\in\mathbb{R}^N$ be the eigenvalues of an $N\times N$ matrix $M$. When $M$ is distributed as (1.1), the probability density function of its eigenvalues is written as the following $m_{G,\alpha}^N$, which is called the generalised Gaussian ensemble:
$$m_{G,\alpha}^N(d\mathbf{x}_N)=\frac{1}{Z}\prod_{1\le i<j\le N}|x_i-x_j|^2\prod_{1\le k\le N}|x_k|^{2\alpha}e^{-x_k^2}\,d\mathbf{x}_N. \tag{1.2}$$
Then, $m_{G,\alpha}^N$ naturally gives a random point field $\mu_{G,\alpha}^N$ in the sense that the labelled density of $\mu_{G,\alpha}^N$ with respect to the Lebesgue measure is given by $m_{G,\alpha}^N$.

Note that $\mu_{G,\alpha}^N$ is a determinantal random point field. Let $\{p_\alpha^n\}_{n\in\mathbb{N}}$ be the monic orthogonal polynomials with respect to $|x|^{2\alpha}e^{-x^2}\,dx$, and set $h_n=\int_{\mathbb{R}}(p_\alpha^n(x))^2|x|^{2\alpha}e^{-x^2}\,dx$. Let $K_{G,\alpha}^N\colon\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ be the determinantal kernel defined as
$$K_{G,\alpha}^N(x,y)=|x|^{\alpha}|y|^{\alpha}e^{-\frac{x^2}{2}-\frac{y^2}{2}}\sum_{k=0}^{N-1}\frac{p_\alpha^k(x)\,p_\alpha^k(y)}{h_k}. \tag{1.3}$$
Then, for each $n$, the $n$-correlation function $\rho_{G,\alpha}^{N,n}$ of $\mu_{G,\alpha}^N$ is given by
$$\rho_{G,\alpha}^{N,n}(x_1,\dots,x_n)=\det_{1\le i,j\le n}\big[K_{G,\alpha}^N(x_i,x_j)\big].$$
From the Christoffel–Darboux formula [6, Proposition 5.1.3], we have
$$K_{G,\alpha}^N(x,y)=\frac{|x|^{\alpha}|y|^{\alpha}e^{-\frac{x^2}{2}-\frac{y^2}{2}}}{h_{N-1}}\cdot\frac{p_\alpha^N(x)\,p_\alpha^{N-1}(y)-p_\alpha^{N-1}(x)\,p_\alpha^N(y)}{x-y}.$$

To focus on the fluctuation of the generalised Gaussian ensembles around the origin, we take a scaling. We set
$$K_\alpha^N(x,y)=\frac{1}{\sqrt{N}}\,K_{G,\alpha}^N\Big(\frac{x}{\sqrt{N}},\frac{y}{\sqrt{N}}\Big). \tag{1.4}$$
Let $\mu_\alpha^N$ be the determinantal random point field with kernel $K_\alpha^N$. Clearly, the labelled density $m_\alpha^N$ of $\mu_\alpha^N$ is written in the form
$$m_\alpha^N(d\mathbf{x}_N)=\frac{1}{Z}\prod_{1\le i<j\le N}|x_i-x_j|^2\prod_{1\le k\le N}|x_k|^{2\alpha}e^{-\frac{x_k^2}{N}}\,d\mathbf{x}_N. \tag{1.5}$$
Then, the scaled kernel $K_\alpha^N$ has a nontrivial limit:
$$\lim_{N\to\infty}K_\alpha^N(x,y)=K_\alpha(x,y)\quad\text{compact uniformly}. \tag{1.6}$$
Here, the limit kernel $K_\alpha\colon\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ is given, for $x\ne y$, by
$$K_\alpha(x,y)=\frac{x\sqrt{|x^{-1}y|}\,J_{\alpha+\frac12}(2|x|)J_{\alpha-\frac12}(2|y|)-y\sqrt{|xy^{-1}|}\,J_{\alpha-\frac12}(2|x|)J_{\alpha+\frac12}(2|y|)}{2(x-y)}, \tag{1.7}$$
and, for $x=y$, by
$$K_\alpha(x,x)=\frac{|x|}{2}\Big\{J_{\alpha+\frac12}^2(2|x|)+J_{\alpha-\frac12}^2(2|x|)-J_{\alpha+\frac32}(2|x|)J_{\alpha-\frac12}(2|x|)-J_{\alpha+\frac12}(2|x|)J_{\alpha-\frac32}(2|x|)\Big\}, \tag{1.8}$$
where $J_\nu$ denotes the Bessel function of the first kind of order $\nu$. Remark that (1.6) yields weak convergence of random point fields. Let $\mu_\alpha$ be the determinantal random point field whose kernel is $K_\alpha$. More precisely, the $n$-correlation function $\rho_\alpha^n$ of $\mu_\alpha$ is given by
$$\rho_\alpha^n(x_1,\dots,x_n)=\det_{1\le i,j\le n}\big[K_\alpha(x_i,x_j)\big]. \tag{1.9}$$
Then, we obtain
$$\lim_{N\to\infty}\mu_\alpha^N=\mu_\alpha\quad\text{weakly}.$$
The kernel $K_\alpha$ is called the generalised sine kernel of order $\alpha$: when $\alpha=0$, $K_\alpha$ becomes the classical sine kernel $\sin(x-y)/(x-y)$ up to scaling. Note that, for $x,y>0$ or $\alpha\in\mathbb{N}$, the kernel (1.7) takes the simpler form
$$K_\alpha(x,y)=\sqrt{xy}\;\frac{J_{\alpha+\frac12}(2x)J_{\alpha-\frac12}(2y)-J_{\alpha-\frac12}(2x)J_{\alpha+\frac12}(2y)}{2(x-y)}.$$
As with other random matrix models, the universality of the limit kernel has been studied. Let $V\colon\mathbb{R}\to\mathbb{R}$ be a function that increases fast enough at infinity, for instance, an even-degree polynomial with a positive leading coefficient. Then, we consider the following $m_{V,\alpha}^N$, which is a generalisation of (1.2):
$$m_{V,\alpha}^N(d\mathbf{x}_N)=\frac{1}{Z}\prod_{1\le i<j\le N}|x_i-x_j|^2\prod_{1\le k\le N}|x_k|^{2\alpha}e^{-V(x_k)}\,d\mathbf{x}_N.$$
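The kernel formulas above are easy to sanity-check numerically. The following snippet (an illustration of ours, not part of the paper) evaluates the simpler form of the kernel with SciPy and confirms that, at $\alpha=0$, it coincides with a sine kernel; in this normalisation the $\alpha=0$ kernel equals $\sin(2(x-y))/(2\pi(x-y))$, a rescaling of the classical sine kernel.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind J_nu

def K_alpha(a, x, y):
    # Simpler form of the generalised sine kernel, for x, y > 0, x != y.
    return np.sqrt(x * y) * (
        jv(a + 0.5, 2 * x) * jv(a - 0.5, 2 * y)
        - jv(a - 0.5, 2 * x) * jv(a + 0.5, 2 * y)
    ) / (2 * (x - y))

x, y = 0.7, 1.3
sine = np.sin(2 * (x - y)) / (2 * np.pi * (x - y))
assert abs(K_alpha(0.0, x, y) - sine) < 1e-12
```

Here $J_{1/2}$ and $J_{-1/2}$ reduce to sine and cosine, which is what collapses the kernel to sine form at $\alpha=0$.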
Then, a natural problem is the universality of random matrices. More precisely, it is of interest to show that, for potentials $V$ in a quite wide class, the local fluctuation at the origin of $m_{V,\alpha}^N$ yields the universal random point field $\mu_\alpha$ under suitable scaling. Akemann et al. showed the universality under the assumptions that $\alpha$ is a nonnegative integer, $V$ is an even-degree polynomial, and the recurrence coefficients of the associated orthogonal polynomials satisfy a certain condition. The assumption that $\alpha$ is a nonnegative integer was removed by Kanzieper and Freilikher. Moreover, the restriction that $V$ is an even-degree polynomial was lifted by Kuijlaars and Vanlessen, who showed universality for real analytic potentials under mild assumptions. It is natural to try to study stochastic dynamics with infinitely many particles related to the universal random point field $\mu_\alpha$. More specifically, our purpose is to construct a reversible diffusion whose equilibrium measure is $\mu_\alpha$, and to describe its infinite-dimensional stochastic differential equation (ISDE). In order to derive such an ISDE, we consider a stochastic process related to $\mu_\alpha^N$, which is given by
$$dX_t^{N,i}=dB_t^i+\Big\{\frac{\alpha}{X_t^{N,i}}-\frac{X_t^{N,i}}{N}+\sum_{j\ne i}^{N}\frac{1}{X_t^{N,i}-X_t^{N,j}}\Big\}dt,\quad 1\le i\le N, \tag{1.10}$$
where $(B^i)_{i=1}^N$ is the $N$-dimensional Brownian motion. We derive (1.10) as follows. Consider the Dirichlet form
$$\mathcal{E}(f,g)=\int_{\mathbb{R}^N}\frac12\sum_{i=1}^N\frac{\partial f}{\partial x_i}\frac{\partial g}{\partial x_i}\,m_\alpha^N(d\mathbf{x}_N)$$
on $L^2(\mathbb{R}^N,m_\alpha^N(d\mathbf{x}_N))$. Then, by using integration by parts and (1.5), the generator $A^N$ of $\mathcal{E}$ on $L^2(\mathbb{R}^N,m_\alpha^N(d\mathbf{x}_N))$ is given by
$$A^N=\frac12\Delta+\sum_{i=1}^N\Big\{\frac{\alpha}{x_i}-\frac{x_i}{N}+\sum_{j\ne i}^{N}\frac{1}{x_i-x_j}\Big\}\frac{\partial}{\partial x_i},$$
which corresponds to (1.10). Taking $N$ to infinity in (1.10), it is expected that the limit ISDE is given by
$$dX_t^i=dB_t^i+\Big\{\frac{\alpha}{X_t^i}+\lim_{r\to\infty}\sum_{|X_t^i-X_t^j|<r,\ j\ne i}\frac{1}{X_t^i-X_t^j}\Big\}dt,\quad i\in\mathbb{N}. \tag{1.11}$$
Here, $(B^i)_{i\in\mathbb{N}}$ is a collection of independent copies of the one-dimensional Brownian motion.
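For readers who want to experiment with the $N$-particle dynamics (1.10), here is a minimal Euler–Maruyama sketch (ours, not from the paper; the function name and all numerical parameters are hypothetical). The drift is singular at collisions and at the origin, so this naive scheme is only meaningful for small steps and well-separated particles.

```python
import numpy as np

def euler_maruyama(alpha, x0, dt, steps, rng):
    # Naive Euler-Maruyama discretisation of the SDE (1.10):
    # drift_i = alpha/x_i - x_i/N + sum_{j != i} 1/(x_i - x_j).
    x = np.array(x0, dtype=float)
    n = len(x)
    for _ in range(steps):
        diff = x[:, None] - x[None, :]      # pairwise differences x_i - x_j
        np.fill_diagonal(diff, np.inf)      # drop the j = i term (1/inf = 0)
        drift = alpha / x - x / n + np.sum(1.0 / diff, axis=1)
        x += drift * dt + np.sqrt(dt) * rng.standard_normal(n)
    return x

rng = np.random.default_rng(0)
x0 = np.linspace(-2.0, 2.0, 10)             # initial configuration, origin avoided
xT = euler_maruyama(1.0, x0, dt=1e-4, steps=50, rng=rng)
assert np.all(np.isfinite(xT))
```

No care is taken here near collisions; a serious scheme would need an implicit or reflection-based treatment of the singular terms.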
Clearly, (1.11) becomes the Dyson model in the case $\alpha=0$. Our purpose is to construct a solution $(X^i)_{i\in\mathbb{N}}$ to (1.11). It is important to point out that (1.11) is essentially different from the Bessel interacting ISDE, and deriving (1.11) from (1.10) is nontrivial because of the long-range effect of the logarithmic interaction. Honda and Osada derived the following Bessel interacting ISDE for $\alpha>1/2$:
$$dX_t^i=dB_t^i+\Big\{\frac{\alpha}{X_t^i}+\sum_{j\in\mathbb{N},\,j\ne i}\frac{1}{X_t^i-X_t^j}\Big\}dt,\quad i\in\mathbb{N}. \tag{1.12}$$
Note that the (unlabelled) solution to (1.12) is reversible with respect to the Bessel random point field. Seemingly, our model (1.11) resembles (1.12). However, the sum in the drift term of (1.12) converges absolutely, unlike that of (1.11). The Bessel random point field is a random point field on the positive half-line, and its one-correlation function, which stands for the mean density of particles, decreases with order $x^{-1/2}$ as $x\to\infty$; therefore, the sum in the interaction term of (1.12) converges absolutely. In contrast, the generalised sine random point field lives on $\mathbb{R}$, and its one-correlation function is of constant order as $|x|\to\infty$; this implies that the drift term in (1.11) does not converge absolutely. Thus, we need careful computation to show the conditional convergence of the drift term, as opposed to (1.12), and (1.11) is more difficult to study than (1.12) in this respect.

We now give some historical remarks on interacting Brownian motions with infinitely many particles. For a free potential $\Phi$ and an interaction potential $\Psi$, interacting Brownian motions in infinite dimensions are given by an ISDE of the form
$$dX_t^i=dB_t^i-\frac{\beta}{2}\nabla_x\Phi(X_t^i)\,dt-\frac{\beta}{2}\sum_{j\ne i}\nabla_x\Psi(X_t^i,X_t^j)\,dt,\quad i\in\mathbb{N}. \tag{1.13}$$
Here, $(B^i)_{i\in\mathbb{N}}$ is a collection of independent copies of $d$-dimensional Brownian motion and $\beta>0$ is the inverse temperature. Lang derived general solutions to ISDE (1.13) under the conditions $\Phi=0$ and $\Psi\in C_0^3(\mathbb{R}^d)$. Fritz explicitly described the set of starting points for up to four dimensions, and Tanemura solved the equations for hardcore Brownian balls. These results were achieved through Itô's method, and required the coefficients to be smooth and have compact support. These conditions exclude physically interesting examples of long-range interaction potentials, such as the Lennard-Jones 6-12 potential and Riesz potentials. In particular, the logarithmic potential, which appears in random matrix theory, is also excluded. Osada established a Dirichlet form approach to solving ISDEs with long-range potentials, which can be applied to logarithmic interacting particle systems. Osada and Tanemura also showed the strong uniqueness of solutions. Furthermore, Tsai constructed nonequilibrium solutions to the Dyson model for $1\le\beta<\infty$. We use the Dirichlet form approach to construct the unique strong solution to (1.11). More precisely, we construct a diffusion on the configuration space that is reversible with respect to $\mu_\alpha$, and show that the process satisfies (1.11). To derive ISDEs by the Dirichlet form approach, an expression for the logarithmic derivative is crucial, because the logarithmic derivative corresponds to the drift term of the ISDE. Bufetov, Dymov, and Osada introduced a method to compute the logarithmic derivatives of determinantal random point fields. Since $\mu_\alpha$ is determinantal, their result seems to be applicable to our case.
In spite of this, we use a finite-particle approximation method to find the logarithmic derivative of $\mu_\alpha$. This is because finite-particle approximation of the logarithmic derivative implies that of the dynamics. More precisely, let $(X^{N,i})_{1\le i\le N}$ and $(X^i)_{i\in\mathbb{N}}$ be the solutions to (1.10) and (1.11), respectively. Then, under suitable labelling, the approximation of the logarithmic derivatives implies that the first $m$ particles of $(X^{N,i})_{1\le i\le N}$ converge to those of $(X^i)_{i\in\mathbb{N}}$ weakly in the path space. We will show such convergence in Theorem 2.4. This is an advantage of our method.

This paper is organised as follows. In Section 2, the main results are stated. In Section 3, we prepare estimates of determinantal kernels. In Section 4 and Section 5, the logarithmic derivative and the quasi-Gibbs property of $\mu_\alpha$ are shown, respectively. In Section 6, the strong uniqueness of our solution is proved.

Set up and main results

The strong uniqueness of solutions to ISDEs and dynamical convergence

Let $S=\mathbb{R}$. Let $\mathsf{S}$ be the configuration space over $S$ given by
$$\mathsf{S}=\Big\{\mathsf{s}=\sum_i\delta_{s_i}\,;\,s_i\in S,\ \mathsf{s}(K)<\infty\ \text{for any compact set }K\subset S\Big\}.$$
We regard the zero measure as an element of $\mathsf{S}$ by convention. We equip $\mathsf{S}$ with the vague topology, which makes $\mathsf{S}$ a Polish space. A probability measure $\mu$ on $(\mathsf{S},\mathcal{B}(\mathsf{S}))$ is called a random point field on $S$. Let
$$\mathsf{S}_s=\{\mathsf{s}\in\mathsf{S}\,;\,\mathsf{s}(\{x\})\le1\ \text{for any }x\in S\},\quad \mathsf{S}_i=\{\mathsf{s}\in\mathsf{S}\,;\,\mathsf{s}(S)=\infty\},\quad \mathsf{S}_{s,i}=\mathsf{S}_s\cap\mathsf{S}_i.$$
For each $n\in\mathbb{N}$, a symmetric and locally integrable function $\rho^n\colon S^n\to[0,\infty)$ is called the $n$-correlation function of $\mu$ with respect to the Lebesgue measure if $\rho^n$ satisfies
$$\int_{A_1^{k_1}\times\cdots\times A_m^{k_m}}\rho^n(x_1,\dots,x_n)\,dx_1\cdots dx_n=\int_{\mathsf{S}}\prod_{i=1}^m\frac{\mathsf{s}(A_i)!}{(\mathsf{s}(A_i)-k_i)!}\,d\mu(\mathsf{s})$$
for any sequence of disjoint bounded sets $A_1,\dots,A_m\in\mathcal{B}(S)$ and any sequence of natural numbers $k_1,\dots,k_m$ satisfying $k_1+\cdots+k_m=n$. We define an unlabelling map $u\colon\{\bigcup_{k=0}^\infty S^k\}\cup S^{\mathbb{N}}\to\mathsf{S}$ as $u((s_i)_i)=\sum_i\delta_{s_i}$.
Here, $S^0=\{\emptyset\}$ and $u(\emptyset)=o$, where $o$ is the zero measure. Furthermore, a measurable function $l\colon\mathsf{S}_{s,i}\to S^{\mathbb{N}}$ is called a labelling map if $u\circ l$ is the identity map. Recall that $\mu_\alpha$ is the random point field whose correlation functions are given by (1.9). The next theorem is the main result of the present paper.

Theorem 2.1. Assume $\alpha>1/2$. Then, there exists a set $\mathsf{S}_\alpha$ satisfying
$$\mu_\alpha(\mathsf{S}_\alpha)=1,\quad \mathsf{S}_\alpha\subset\mathsf{S}_{s,i}, \tag{2.1}$$
such that the following holds: for any $\mathbf{s}\in u^{-1}(\mathsf{S}_\alpha)$, there exists an $S^{\mathbb{N}}$-valued continuous process $X=(X^i)_{i\in\mathbb{N}}$ and an $\mathbb{R}^{\mathbb{N}}$-valued Brownian motion $B=(B^i)_{i\in\mathbb{N}}$ satisfying
$$dX_t^i=dB_t^i+\Big\{\frac{\alpha}{X_t^i}+\lim_{r\to\infty}\sum_{|X_t^i-X_t^j|<r,\ j\ne i}\frac{1}{X_t^i-X_t^j}\Big\}dt, \tag{2.2}$$
$$X_0=\mathbf{s}, \tag{2.3}$$
and
$$P\big(u(X_t)\in\mathsf{S}_\alpha,\ 0\le t<\infty\big)=1. \tag{2.4}$$

The solutions obtained in Theorem 2.1 are just weak solutions. We next show the strong uniqueness of solutions to (2.2). We shall give the definitions of (IFC), (AC), (SIN), (NBJ), and (MF) in Section 6.

Theorem 2.2. For $\mu_\alpha\circ l^{-1}$-a.s. $\mathbf{s}$, there exists a strong solution $(X,B)$ to (2.2)–(2.3) satisfying (IFC), ($\mu_\alpha$-AC), (SIN), and (NBJ). Furthermore, the strong uniqueness of solutions to (2.2)–(2.3) holds under the constraints of (MF), (IFC), ($\mu_\alpha$-AC), (SIN), and (NBJ).

Five assumptions, (IFC), (AC), (SIN), (NBJ), and (MF), are required for the uniqueness of solutions in Theorem 2.2. It is not difficult to check these assumptions in practice. In fact, we showed that solutions to ISDEs satisfy (IFC) under mild assumptions, and furthermore, if a solution is associated with a Dirichlet form, we can check all five assumptions. Thus, as an application of the uniqueness of solutions in Theorem 2.2, we can show the uniqueness of Dirichlet forms associated with the generalised sine random point field, but we do not pursue this here.

Let $(X,B)$ be the unique solution obtained in Theorem 2.2, and write $X=(X^i)_{i\in\mathbb{N}}$. Let $X^N=(X^{N,i})_{1\le i\le N}$ be the (unique) solution to (1.10). In analogy with the labelling map $l$, let $l^N\colon\{\mathsf{s}\in\mathsf{S}\,;\,\mathsf{s}(S)=N\}\to S^N$ be a labelling map for $N$-particle configurations. For labelling maps $l=(l^i)_{i\in\mathbb{N}}$ and $l^N=(l^{N,i})_{1\le i\le N}$, we write $l_m=(l^i)_{1\le i\le m}$ and $l_m^N=(l^{N,i})_{1\le i\le m}$, respectively. Here and subsequently, $W(A)$ stands for the set of all continuous paths $w\colon[0,\infty)\to A$. Then, we have weak convergence of $X^N$ to $X$ in path space.

Theorem 2.4. Assume $\alpha>1/2$. Suppose that, for each $m\in\mathbb{N}$,
$$\lim_{N\to\infty}\mu_\alpha^N\circ(l_m^N)^{-1}=\mu_\alpha\circ l_m^{-1}\quad\text{weakly}.$$
Assume that $X_0^N=\mu_\alpha^N\circ(l^N)^{-1}$ and $X_0=\mu_\alpha\circ l^{-1}$ in distribution. Then, for each $m\in\mathbb{N}$,
$$\lim_{N\to\infty}(X^{N,1},X^{N,2},\dots,X^{N,m})=(X^1,X^2,\dots,X^m)$$
weakly in $W(S^m)$.

Quasi-Gibbs property and logarithmic derivative

Theorem 2.1 is proved by making use of the Dirichlet form approach. In this framework, the quasi-Gibbs property and logarithmic derivatives play important roles. We first introduce the concept of quasi-Gibbs measures. We set $S_r=\{|x|\le r\}$ and $\mathsf{S}_r^m=\{\mathsf{s}\in\mathsf{S}\,;\,\mathsf{s}(S_r)=m\}$. Let $\Lambda_r$ be the Poisson random point field whose intensity is the Lebesgue measure on $S_r$, and set $\Lambda_r^m=\Lambda_r(\cdot\cap\mathsf{S}_r^m)$. Let $\pi_r,\pi_r^c\colon\mathsf{S}\to\mathsf{S}$ be such that $\pi_r(\mathsf{s})=\mathsf{s}(\cdot\cap S_r)$ and $\pi_r^c(\mathsf{s})=\mathsf{s}(\cdot\cap S_r^c)$, respectively.

Definition 2.5 ([26, 27]). A random point field $\mu$ is said to be a $(\Phi,\Psi)$-quasi-Gibbs measure if, for each $r,m\in\mathbb{N}$ and $\mu$-a.s. $\mathsf{s}$, the regular conditional probability
$$\mu_{r,\mathsf{s}}^m:=\mu\big(\pi_r(\mathsf{x})\in\cdot\ \big|\ \pi_r^c(\mathsf{x})=\pi_r^c(\mathsf{s}),\ \mathsf{x}(S_r)=m\big)$$
satisfies
$$c_1^{-1}\,e^{-H_r(\mathsf{x})}\,\Lambda_r^m(d\mathsf{x})\le\mu_{r,\mathsf{s}}^m(d\mathsf{x})\le c_1\,e^{-H_r(\mathsf{x})}\,\Lambda_r^m(d\mathsf{x}).$$
Here, $c_1=c_1(r,m,\mathsf{s})$ is a positive constant depending on $r$, $m$, and $\mathsf{s}$, and $H_r$ is the Hamiltonian on $S_r$ defined as
$$H_r(\mathsf{x})=\sum_{x_i\in S_r}\Phi(x_i)+\sum_{x_j,x_k\in S_r,\ j<k}\Psi(x_j,x_k)\quad\text{for }\mathsf{x}=\sum_i\delta_{x_i}.$$
Moreover, for two measures $\mu,\nu$ on a $\sigma$-field $\mathcal{F}$, we write $\mu\le\nu$ if $\mu(A)\le\nu(A)$ for all $A\in\mathcal{F}$.

Theorem 2.6. For any $\alpha>1/2$, $\mu_\alpha$ is a $(-2\alpha\log|x|,\,-2\log|x-y|)$-quasi-Gibbs measure.

From Theorem 2.6, a reversible diffusion with respect to $\mu_\alpha$ is constructed by Dirichlet form theory. Let $\mathcal{E}^{\mu_\alpha}$ and $\mathcal{D}_\circ^{\mu_\alpha}$ be as in (2.8) and (2.9) with $k=0$ and $\mu=\mu_\alpha$, respectively. See the standard theory of Dirichlet forms for notions such as locality, quasi-regularity, associated diffusions, and capacity.

Corollary 2.7. Assume $\alpha>1/2$. Then, $(\mathcal{E}^{\mu_\alpha},\mathcal{D}_\circ^{\mu_\alpha})$ is closable on $L^2(\mu_\alpha)$, and its closure $(\mathcal{E}^{\mu_\alpha},\mathcal{D}^{\mu_\alpha})$ is a local and quasi-regular Dirichlet form. Thus, there exists an $\mathsf{S}$-valued diffusion $(\mathsf{X},\{P_{\mathsf{s}}\}_{\mathsf{s}\in\mathsf{S}})$ associated with $(\mathcal{E}^{\mu_\alpha},\mathcal{D}^{\mu_\alpha})$.

This corollary immediately follows from Theorem 2.6 with [26, Lemma 2.1, Corollary 2.1]. □

We remark that each particle does not hit the origin. Let $\mathrm{Cap}^{\mu_\alpha}$ be the one-capacity with respect to $(\mathcal{E}^{\mu_\alpha},\mathcal{D}^{\mu_\alpha},L^2(\mu_\alpha))$.

Lemma 2.8. Assume $\alpha>1/2$. Let $A=\{\mathsf{s}\,;\,\mathsf{s}(\{0\})\ge1\}$. Then, $\mathrm{Cap}^{\mu_\alpha}(A)=0$.

From (1.8), we see that, for each $r$, there exists a positive constant $c_2$ such that
$$\rho_\alpha^1(x)\le c_2|x|^{2\alpha}\quad\text{for }x\in S_r. \tag{2.5}$$
Once we have obtained (2.5), the lemma is proved by the same argument as in [9, Lemma B.1]. □

We next prepare some quantities needed to introduce the logarithmic derivative, which is a crucial quantity for the representation of ISDEs. For a random point field $\mu$ and $\mathbf{x}=(x_1,\dots,x_k)\in S^k$ for some $k\in\mathbb{N}$, we call $\mu_{\mathbf{x}}$ the (reduced) Palm measure of $\mu$ conditioned at $\mathbf{x}$ if $\mu_{\mathbf{x}}$ is the regular conditional probability defined as
$$\mu_{\mathbf{x}}=\mu\Big(\cdot-\sum_{i=1}^k\delta_{x_i}\,\Big|\,\mathsf{s}(\{x_i\})\ge1\ \text{for }i=1,\dots,k\Big).$$
Recall that $\rho^k$ denotes the $k$-correlation function of $\mu$. A Radon measure $\mu^{[k]}$ on $S^k\times\mathsf{S}$ is called the $k$-Campbell measure of $\mu$ if $\mu^{[k]}$ is given by
$$\mu^{[k]}(d\mathbf{x}\,d\mathsf{s})=\rho^k(\mathbf{x})\,\mu_{\mathbf{x}}(d\mathsf{s})\,d\mathbf{x}.$$
Let $L_{\mathrm{loc}}^p(S\times\mathsf{S},\mu^{[1]})=\bigcap_{r=1}^{\infty}L^p(S_r\times\mathsf{S},\mu^{[1]})$.

A function $f\colon\mathsf{S}\to\mathbb{R}$ is said to be local if $f$ is $\sigma[\pi_r]$-measurable for some $r\in\mathbb{N}$. Moreover, we say that $f$ is smooth if $\check f$ is smooth, where $\check f$ is the permutation-invariant function in $(s_i)_i$ such that $f(\mathsf{s})=\check f((s_i)_i)$ for $\mathsf{s}=\sum_i\delta_{s_i}$. Let $\mathcal{D}_\infty$ be the set of all local smooth functions on $\mathsf{S}$, and write $\mathcal{D}_{\infty,b}=\{f\in\mathcal{D}_\infty\,;\,f\ \text{is bounded}\}$. We set
$$C_0^\infty(\mathbb{R}\setminus\{0\})\otimes\mathcal{D}_{\infty,b}=\Big\{\sum_{i=1}^m f_i(x)g_i(\mathsf{y})\,;\,f_i\in C_0^\infty(\mathbb{R}\setminus\{0\}),\ g_i\in\mathcal{D}_{\infty,b},\ m\in\mathbb{N}\Big\}.$$

Definition 2.9 ([25]).

An R-valued function d μ L loc 1 ( S × S , μ [ 1 ] ) is called the logarithmic derivative of μ if, for all φ C 0 ( R { 0 } ) D , b , S × S d μ ( x , y ) φ ( x , y ) μ [ 1 ] ( d x d y ) = S × S x φ ( x , y ) μ [ 1 ] ( d x d y ) .

Remark 2.10. The space of test functions is not $C_0^\infty(\mathbb{R})\otimes\mathcal{D}_{\infty,b}$ but $C_0^\infty(\mathbb{R}\setminus\{0\})\otimes\mathcal{D}_{\infty,b}$, since particles never hit the origin in the case $\mu=\mu_\alpha$ with $\alpha>1/2$, by Lemma 2.8. The next claim is the key theorem of the present paper.

Theorem 2.11. For $\alpha>1/2$, the logarithmic derivative of $\mu_\alpha$ exists in $L_{\mathrm{loc}}^p(S\times\mathsf{S},\mu_\alpha^{[1]})$ for $1\le p<2$, and it is given by
$$\mathsf{d}^{\mu_\alpha}(x,\mathsf{y})=\frac{2\alpha}{x}+\lim_{r\to\infty}\sum_{|x-y_j|<r}\frac{2}{x-y_j}\quad\text{for }\mathsf{y}=\sum_j\delta_{y_j}. \tag{2.6}$$
Here, the limit on the right-hand side of (2.6) is taken in $L_{\mathrm{loc}}^p(S\times\mathsf{S},\mu_\alpha^{[1]})$.

The general theory for ISDEs

This subsection is devoted to the general framework – the Dirichlet form approach for infinite particle systems. We first introduce Dirichlet forms describing k-labelled processes.

For $f,g\in\mathcal{D}_\infty$, we define $\mathbb{D}[f,g]\colon\mathsf{S}\to\mathbb{R}$ as
$$\mathbb{D}[f,g](\mathsf{s})=\frac12\sum_i\frac{\partial\check f(\mathbf{s})}{\partial s_i}\frac{\partial\check g(\mathbf{s})}{\partial s_i}. \tag{2.7}$$
Here $\mathsf{s}=\sum_i\delta_{s_i}$ and $\mathbf{s}=(s_i)_i$. Remark that $\mathbb{D}[f,g]$ is well defined because the right-hand side of (2.7) depends only on $\mathsf{s}$. For $f,g\in C_0^\infty(S^k)\otimes\mathcal{D}_\infty$, let $\nabla^k[f,g]$ be the function on $S^k\times\mathsf{S}$ defined as
$$\nabla^k[f,g](\mathbf{x},\mathsf{s})=\frac12\sum_{i=1}^k\frac{\partial f(\mathbf{x},\mathsf{s})}{\partial x_i}\frac{\partial g(\mathbf{x},\mathsf{s})}{\partial x_i},$$
where $\mathbf{x}=(x_i)_{i=1}^k\in S^k$. For $f,g\in C_0^\infty(S^k)\otimes\mathcal{D}_\infty$, we set
$$\mathbb{D}^k[f,g](\mathbf{x},\mathsf{s})=\nabla^k[f,g](\mathbf{x},\mathsf{s})+\mathbb{D}[f(\mathbf{x},\cdot),g(\mathbf{x},\cdot)](\mathsf{s}).$$
Then, we set the bilinear form $(\mathcal{E}^{\mu^{[k]}},\mathcal{D}_\circ^{\mu^{[k]}})$ as
$$\mathcal{E}^{\mu^{[k]}}(f,g)=\int_{S^k\times\mathsf{S}}\mathbb{D}^k[f,g]\,d\mu^{[k]}, \tag{2.8}$$
$$\mathcal{D}_\circ^{\mu^{[k]}}=\big\{f\in(C_0^\infty(S^k)\otimes\mathcal{D}_\infty)\cap L^2(S^k\times\mathsf{S},\mu^{[k]})\,;\,\mathcal{E}^{\mu^{[k]}}(f,f)<\infty\big\}. \tag{2.9}$$
When $k=0$, we interpret $\mathbb{D}^0=\mathbb{D}$ and $\mu^{[0]}=\mu$. For simplicity, we write $L^2(\mu^{[k]})=L^2(S^k\times\mathsf{S},\mu^{[k]})$.

Let $\mathrm{Erf}(t)=(1/\sqrt{2\pi})\int_t^\infty e^{-x^2/2}\,dx$ be the complementary error function. Suppose that a random point field $\mu$ satisfies the following (A1)–(A5).

(A1) For each $n\in\mathbb{N}$, the $n$-correlation function $\rho^n$ of $\mu$ exists, and $\rho^n$ is locally bounded.

(A2) The logarithmic derivative $\mathsf{d}^\mu$ exists.

(A3) For each $k\in\{0\}\cup\mathbb{N}$, $(\mathcal{E}^{\mu^{[k]}},\mathcal{D}_\circ^{\mu^{[k]}})$ is closable on $L^2(\mu^{[k]})$.

(A4) $\mathrm{Cap}^\mu(\mathsf{S}_s^c)=0$.

(A5) There exists $T>0$ such that, for each $R>0$,
$$\liminf_{r\to\infty}\Big\{\int_{|x|\le r+R}\rho^1(x)\,dx\Big\}\,\mathrm{Erf}\Big(\frac{r}{\sqrt{(r+R)T}}\Big)=0.$$

From (A3), let $(\mathcal{E}^{\mu^{[k]}},\mathcal{D}^{\mu^{[k]}})$ be the closure of $(\mathcal{E}^{\mu^{[k]}},\mathcal{D}_\circ^{\mu^{[k]}})$ on $L^2(\mu^{[k]})$. It is known that $(\mathcal{E}^{\mu^{[k]}},\mathcal{D}^{\mu^{[k]}})$ is a quasi-regular Dirichlet form. Then, $\mathrm{Cap}^\mu$ in (A4) denotes the one-capacity associated with $(\mathcal{E}^{\mu},\mathcal{D}^{\mu},L^2(\mu))$. Assumption (A4) means that particles never collide. Furthermore, (A5) implies that each tagged particle does not explode.
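As a concrete instance (a short verification of ours, not spelled out in the paper), (A5) holds with any $T>0$ whenever the one-correlation function is bounded, which is the case for $\mu_\alpha$ because the kernel $K_\alpha$ is bounded:

```latex
% If \rho^1 \le c on \mathbb{R}, then, with the Gaussian tail bound
% \mathrm{Erf}(t) \le \tfrac12 e^{-t^2/2} for t \ge 0,
\Big\{\int_{|x|\le r+R}\rho^1(x)\,dx\Big\}\,
\mathrm{Erf}\Big(\frac{r}{\sqrt{(r+R)T}}\Big)
\;\le\; 2c\,(r+R)\cdot\tfrac12\,e^{-\frac{r^2}{2(r+R)T}}
\;\longrightarrow\; 0 \qquad (r\to\infty),
```

since the exponent $r^2/(2(r+R)T)$ grows linearly in $r$ while the prefactor grows only linearly.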

Recall that $W(A)=C([0,\infty),A)$. Each $w\in W(\mathsf{S}_{s,i})$ can be written as $w_t=\sum_i\delta_{w_t^i}$, where $w^i$ is an $S$-valued continuous path defined on an interval $I_i$ of the form $[0,b_i)$ or $(a_i,b_i)$, where $0\le a_i<b_i\le\infty$. Taking maximal intervals of this form, we can choose $[0,b_i)$ and $(a_i,b_i)$ uniquely up to labelling. We remark that $\lim_{t\downarrow a_i}|w_t^i|=\infty$ and $\lim_{t\uparrow b_i}|w_t^i|=\infty$ for $b_i<\infty$ for all $i$. We call $w^i$ a tagged path of $w$ and $I_i$ the defining interval of $w^i$. Let
$$W_{\mathrm{NE}}(\mathsf{S}_{s,i})=\{w\in W(\mathsf{S}_{s,i})\,;\,I_i=[0,\infty)\ \text{for all }i\}.$$
It is said that the tagged path $w^i$ of $w$ does not explode if $b_i=\infty$, and does not enter if $I_i=[0,b_i)$, where $b_i$ is the right end of the defining interval of $w^i$. Thus, $W_{\mathrm{NE}}(\mathsf{S}_{s,i})$ is the set consisting of all nonexploding and nonentering paths in $W(\mathsf{S}_{s,i})$.

We can naturally lift each $w=\{\sum_i\delta_{w_t^i}\}_{t\in[0,\infty)}\in W_{\mathrm{NE}}(\mathsf{S}_{s,i})$ to the labelled path $\mathbf{w}=(w^i)_{i\in\mathbb{N}}=\{\mathbf{w}_t\}_{t\in[0,\infty)}=\{(w_t^i)_{i\in\mathbb{N}}\}_{t\in[0,\infty)}\in W(S^{\mathbb{N}})$ using a label $l=(l^i)_{i\in\mathbb{N}}$. Indeed, for each $w\in W_{\mathrm{NE}}(\mathsf{S}_{s,i})$, we can construct the labelled process $\mathbf{w}=\{(w_t^i)_{i\in\mathbb{N}}\}_{t\in[0,\infty)}$ such that $\mathbf{w}_0=l(w_0)$, because each tagged particle can carry the initial label $i$ thanks to the noncollision and nonexplosion properties of $w$. We write this correspondence as $l_{\mathrm{path}}(w)=(l_{\mathrm{path}}^i(w))_{i\in\mathbb{N}}$.

From (A1) and (A3), there exists a diffusion $\mathsf{X}$ on $\mathsf{S}$ associated with $(\mathcal{E}^{\mu},\mathcal{D}^{\mu},L^2(\mu))$. Then, we set $\mathbf{X}=l_{\mathrm{path}}(\mathsf{X})$. The labelled process $\mathbf{X}$ gives a weak solution to an ISDE as follows.

Lemma 2.12 ([25, Theorem 26]).

Assume (A1)–(A5). Then, there exists a set $\mathsf{H}$ satisfying
$$\mu(\mathsf{H})=1,\quad\mathsf{H}\subset\mathsf{S}_{s,i},$$
such that the following holds: for all $\mathbf{s}\in u^{-1}(\mathsf{H})$, there exists an $S^{\mathbb{N}}$-valued Brownian motion $B=(B^i)_{i\in\mathbb{N}}$ such that $(\mathbf{X},B)$ is a weak solution to
$$dX_t^i=dB_t^i+\frac12\,\mathsf{d}^\mu(X_t^i,\mathsf{X}_t^{i\diamond})\,dt,\quad X_0=\mathbf{s},$$
where $\mathsf{X}_t^{i\diamond}=\sum_{j\in\mathbb{N},\,j\ne i}\delta_{X_t^j}$. Moreover, it holds that
$$P\big(\mathsf{X}_t\in\mathsf{H},\ 0\le t<\infty\big)=1.$$

Derivation of Theorem 2.1 from Theorem 2.6 and Theorem 2.11

We shall apply Lemma 2.12 to $\mu_\alpha$ to show Theorem 2.1.

Lemma 2.13. For any $\alpha>1/2$, $\mu_\alpha$ satisfies (A1), (A4) and (A5).

It is easy to see that $K_\alpha$ is bounded. Then, we have (A1) and (A5). Furthermore, because $K_\alpha$ is locally Lipschitz continuous, (A4) follows from [23, Theorem 2.1]. □

We derive Theorem 2.1 from Theorem 2.6 and Theorem 2.11. Assumption (A2) holds by Theorem 2.11. Furthermore, we obtain (A3) from Theorem 2.6. Actually, (A3) for $k=0$ follows from the quasi-Gibbs property of $\mu_\alpha$ [26, Lemma 3.6]. For general $k\ge1$, (A3) also follows from the quasi-Gibbs property of $\mu_\alpha$ in a similar manner. Combining these with Lemma 2.13, we have checked (A1)–(A5) for $\mu_\alpha$. Therefore, Lemma 2.12 completes the proof of Theorem 2.1. □

With the above argument, Theorem 2.1 is reduced to Theorem 2.6 and Theorem 2.11. The next section is devoted to preparing key tools for the proofs of Theorem 2.6 and Theorem 2.11.

Key tools for the proofs of main results

Determinantal kernels

For $\nu>-1$, let $\{L_\nu^n(x)\}_{n\in\mathbb{N}}$ be the classical Laguerre polynomials, that is,
$$L_\nu^n(x)=\frac{e^x x^{-\nu}}{n!}\frac{d^n}{dx^n}\big(e^{-x}x^{n+\nu}\big).$$
For simplicity, we write
$$\lambda_\nu^n(x)=|x|^{\nu}e^{-\frac{x^2}{2}}L_\nu^n(x^2).$$

The orthogonal polynomials $\{p_\alpha^n\}_{n\in\mathbb{N}}$, which appear in (1.3), are represented by the Laguerre polynomials. Accordingly, $K_{G,\alpha}^N$ can be rewritten in terms of $\{\lambda_\nu^n\}_{n\in\mathbb{N}}$ as follows.

Lemma 3.1 ([20]).

For any k { 0 } N, K G , α N ( x , y ) = Γ ( k + 1 ) Γ ( k + α + 1 2 ) x | x 1 y | λ α + 1 2 k ( x ) λ α 1 2 k ( y ) y | x y 1 | λ α 1 2 k ( x ) λ α + 1 2 k ( y ) x y for N = 2 k + 1 , Γ ( k + 1 ) Γ ( k + α + 1 2 ) x | x 1 y | λ α + 1 2 k 1 ( x ) λ α 1 2 k ( y ) y | x y 1 | λ α 1 2 k ( x ) λ α + 1 2 k 1 ( y ) x y for N = 2 k .

This lemma yields an expression for the one-correlation function of $\mu_{G,\alpha}^N$. We note the symmetry $\rho_{G,\alpha}^{N,1}(x)=\rho_{G,\alpha}^{N,1}(-x)$ and $\rho_\alpha^1(x)=\rho_\alpha^1(-x)$. Hence, for simplicity, we consider the one-correlation functions for nonnegative $x$ in Lemma 3.2 and Lemma 3.3.

Lemma 3.2. The following holds for $x\ge0$: when $N=2k+1$,
$$\rho_{G,\alpha}^{N,1}(x)=k^{\frac12-\alpha}\Big(2x\big\{\lambda_{\alpha+\frac12}^{k}(x)\lambda_{\alpha+\frac12}^{k-1}(x)-\lambda_{\alpha-\frac12}^{k}(x)\lambda_{\alpha+\frac32}^{k-1}(x)\big\}+\lambda_{\alpha+\frac12}^{k}(x)\lambda_{\alpha-\frac12}^{k}(x)\Big)\big(1+O(N^{-1})\big),$$
and when $N=2k$,
$$\rho_{G,\alpha}^{N,1}(x)=k^{\frac12-\alpha}\Big(2x\big\{(\lambda_{\alpha+\frac12}^{k-1}(x))^2-\lambda_{\alpha-\frac12}^{k}(x)\lambda_{\alpha+\frac32}^{k-2}(x)\big\}+\lambda_{\alpha+\frac12}^{k-1}(x)\lambda_{\alpha-\frac12}^{k}(x)\Big)\big(1+O(N^{-1})\big)$$
as $N\to\infty$. Here, the $O(N^{-1})$ terms are independent of $x\in\mathbb{R}$.

Noting that $(d/dx)L_\nu^k(x)=-L_{\nu+1}^{k-1}(x)$, we have the equality $(d/dx)L_\nu^k(x^2)=-2xL_{\nu+1}^{k-1}(x^2)$. Combining this with Lemma 3.1 and the fact that $\frac{\Gamma(k+a)}{\Gamma(k+b)}=k^{a-b}(1+O(k^{-1}))$ as $k\to\infty$, we have the desired result. □
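The Laguerre derivative identity used in this proof can also be verified symbolically. The following SymPy snippet (ours, purely illustrative; SymPy's `assoc_laguerre(k, nu, x)` is the polynomial written $L_\nu^k$ here) checks $(d/dx)L_\nu^k(x)=-L_{\nu+1}^{k-1}(x)$ for small degrees:

```python
import sympy as sp

x, nu = sp.symbols('x nu')
for k in range(1, 6):
    lhs = sp.diff(sp.assoc_laguerre(k, nu, x), x)   # (d/dx) L_nu^k(x)
    rhs = -sp.assoc_laguerre(k - 1, nu + 1, x)      # -L_{nu+1}^{k-1}(x)
    assert sp.simplify(lhs - rhs) == 0
```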

Lemma 3.3. For $x>0$, we have $\rho_\alpha^1(x)>0$.
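Before the proof, a numerical cross-check (ours, illustrative; `a` stands for $\alpha$) of the Bessel manipulations used below: the completed-square form and the Wronskian-type form of $\rho_\alpha^1(x)$, with derivatives taken in the argument $2x$, agree to machine precision.

```python
import numpy as np
from scipy.special import jv, jvp  # J_nu and its derivative J'_nu

for a in (0.7, 1.5):
    for x in (0.3, 0.9, 2.0):
        z = 2.0 * x
        # Wronskian-type form:
        # x { J'_{a+1/2}(2x) J_{a-1/2}(2x) - J'_{a-1/2}(2x) J_{a+1/2}(2x) }
        w = x * (jvp(a + 0.5, z) * jv(a - 0.5, z) - jvp(a - 0.5, z) * jv(a + 0.5, z))
        # Completed-square form:
        # x { (J_{a+1/2}(2x) - (a/2x) J_{a-1/2}(2x))^2 + (1 - a^2/(2x)^2) J_{a-1/2}(2x)^2 }
        sq = x * ((jv(a + 0.5, z) - (a / z) * jv(a - 0.5, z)) ** 2
                  + (1.0 - (a / z) ** 2) * jv(a - 0.5, z) ** 2)
        assert abs(w - sq) < 1e-12
```

The equivalence follows from the recurrence $J_\nu'(z)=J_{\nu-1}(z)-(\nu/z)J_\nu(z)$ applied to both factors.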

From (1.8) and the fact that $J_{\nu+1}(x)=2\nu x^{-1}J_\nu(x)-J_{\nu-1}(x)$, we see
$$\rho_\alpha^1(x)=x\Big\{\Big(J_{\alpha+\frac12}(2x)-\frac{\alpha}{2x}J_{\alpha-\frac12}(2x)\Big)^2+\Big(1-\frac{\alpha^2}{(2x)^2}\Big)J_{\alpha-\frac12}^2(2x)\Big\}.$$
Combining this with the fact that $J_{\alpha+\frac12}(2x)$ and $J_{\alpha-\frac12}(2x)$ have no common zero points, we see $\rho_\alpha^1(x)>0$ for $2x>\alpha$. Recall that $J_{\nu-1}(z)-J_{\nu+1}(z)=2J_\nu'(z)$. Then, (1.8) yields
$$\rho_\alpha^1(x)=x\big\{J_{\alpha+\frac12}'(2x)J_{\alpha-\frac12}(2x)-J_{\alpha-\frac12}'(2x)J_{\alpha+\frac12}(2x)\big\}=2\alpha\int_0^{2x}\frac{J_{\alpha+\frac12}(t)J_{\alpha-\frac12}(t)}{2t}\,dt=2\alpha\int_0^{2x}\frac{J_{\alpha+\frac12}(t)\{J_{\alpha+\frac32}(t)+2J_{\alpha+\frac12}'(t)\}}{2t}\,dt.$$
We remark that the second equality is shown by differentiating both sides and using the differential equation $x^2J_\nu''(x)+xJ_\nu'(x)+(x^2-\nu^2)J_\nu(x)=0$. Because $J_\nu(z)>0$ and $J_\nu'(z)>0$ for $z\in(0,\nu)$ (see [21, p. 246]), we have $\rho_\alpha^1(x)>0$ for $0<2x\le\alpha$. □

Asymptotic behaviour of the Laguerre polynomials

Because of Lemma 3.1, asymptotic analysis of the correlation functions boils down to that of the Laguerre polynomials. We quote the Plancherel–Rotach type asymptotic results for the Laguerre polynomials by Erdélyi. We introduce the quantity $A_\nu^n=n+\frac{\nu+1}{2}$.

Lemma 3.4 ([5]). Let $\nu>0$. Then, we have the following asymptotics:

(i) For any $0<\rho<\pi/2$,
$$L_\nu^n(4A_\nu^n\cos^2\rho)=\frac{(-1)^n e^{2A_\nu^n\cos^2\rho}}{(2\cos\rho)^{\nu}(\pi n\sin2\rho)^{1/2}}\Big\{\cos\Big(A_\nu^n(\sin2\rho-2\rho)+\frac{\pi}{4}\Big)+O\Big(\frac{1}{n\rho^3}\Big)+O\Big(\frac{1}{n(\pi/2-\rho)}\Big)\Big\} \tag{3.1}$$
as $n\rho^3\to\infty$ and $n(\pi/2-\rho)\to\infty$.

(ii) For any $\rho>0$,
$$L_\nu^n(4A_\nu^n\cosh^2\rho)=\frac{(-1)^n e^{A_\nu^n(1+2\rho+e^{-2\rho})}}{(2\cosh\rho)^{\nu+1}(2\pi A_\nu^n\tanh\rho)^{1/2}}\Big\{1+O\Big(\frac{1+\rho^3}{n\rho^3}\Big)\Big\} \tag{3.2}$$
as $n\rho^3\to\infty$.

(iii) For any $x$ satisfying $|x-4A_\nu^n|=o(n^{\frac35})$,
$$L_\nu^n(x)=\frac{(-1)^n e^{x/2}}{2^{\nu+1/3}n^{1/3}}\Big\{\mathrm{Ai}\Big(\frac{x-4A_\nu^n}{(16A_\nu^n)^{1/3}}\Big)+\widetilde{\mathrm{Ai}}\Big(\frac{x-4A_\nu^n}{(16A_\nu^n)^{1/3}}\Big)\Big[O\Big(\frac1n\Big)+O\Big(\frac{|x-4A_\nu^n|}{n}\Big)+O\Big(\frac{|x-4A_\nu^n|^{5/2}}{n^{3/2}}\Big)\Big]\Big\} \tag{3.3}$$
as $n\to\infty$. Here, $\mathrm{Ai}$ is the Airy function of the first kind, and $\widetilde{\mathrm{Ai}}$ is defined as
$$\widetilde{\mathrm{Ai}}(x)=
\begin{cases}
\mathrm{Ai}(x) & \text{for }x\ge0,\\
\{|\mathrm{Ai}(x)|^2+|\mathrm{Bi}(x)|^2\}^{\frac12} & \text{for }x<0,
\end{cases}$$
where $\mathrm{Bi}$ is the Airy function of the second kind.

Remark that several $O$-terms in Lemma 3.4 contain two variables. For example, $f(n,\rho)\in O((n\rho^3)^{-1})$ as $n\rho^3\to\infty$ means that there exist positive constants $C$ and $L$ such that $|f(n,\rho)|\le C(n\rho^3)^{-1}$ holds for any $n,\rho$ satisfying $n\rho^3>L$. The other terms have a similar meaning. With a similar computation, Lemma 3.4 yields the following asymptotics. For $\nu>0$ and $m\in\{-2,-1,0\}$, the following hold:

(i) For any $0<\rho<\pi/2$, we have
$$\lambda_\nu^{n+m}(2n^{\frac12}\cos\rho)=(-1)^{n+m}\frac{n^{\frac{\nu-1}{2}}}{(\pi\sin2\rho)^{1/2}}\Big\{\cos\Big(n(\sin2\rho-2\rho)-2\rho A_\nu^m+\frac{\pi}{4}\Big)+O\Big(\frac{1}{n\rho^3}\Big)+O\Big(\frac{1}{n(\pi/2-\rho)}\Big)\Big\} \tag{3.4}$$
as $n\rho^3\to\infty$ and $n(\pi/2-\rho)\to\infty$.

(ii) For any $\rho>0$, we have
$$\lambda_\nu^{n+m}(2n^{\frac12}\cosh\rho)=(-1)^{n+m}n^{\frac{\nu-1}{2}}\,\frac{\exp\{n(2\rho-\sinh2\rho)+2\rho A_\nu^m\}}{2^{3/2}\cosh\rho\,(\pi\tanh\rho)^{1/2}}\Big\{1+O\Big(\frac{1+\rho^3}{n\rho^3}\Big)\Big\} \tag{3.5}$$
as $n\rho^3\to\infty$.

For $n$ large enough, there exists $\theta=\theta(n,\rho)$ such that
$$\cos\rho=\Big(\frac{A_\nu^{n+m}}{n}\Big)^{\frac12}\cos(\rho+\theta), \tag{3.6}$$
where $A_\nu^{n+m}=n+m+(\nu+1)/2$. Then, it is easy to see that
$$\theta=\frac{O(n^{-1})}{\sin\rho}, \tag{3.7}$$
$$\cos(\rho+\theta)=\Big\{1-\frac{A_\nu^m}{2n}+O(n^{-2})\Big\}\cos\rho \tag{3.8}$$
as $n\to\infty$. Combining these with $\cos(\rho+\theta)=\cos\rho-\theta\sin\rho+O(\theta^2)$ as $\theta\to0$, we obtain
$$\theta=\frac{A_\nu^m}{2n}\frac{\cos\rho}{\sin\rho}+\frac{O(n^{-2})}{\sin^3\rho} \tag{3.9}$$
as $n\to\infty$. Furthermore, the Taylor expansion with (3.7) yields
$$\sin2(\rho+\theta)-2(\rho+\theta)=\sin2\rho-2\rho-4\theta\sin^2\rho+\frac{O(n^{-2})}{\sin2\rho} \tag{3.10}$$
as $n\to\infty$. Then, by straightforward computation with (3.10) and (3.9), we have
$$A_\nu^{n+m}\big\{\sin2(\rho+\theta)-2(\rho+\theta)\big\}=n(\sin2\rho-2\rho)-2\rho A_\nu^m+\frac{O(n^{-1})}{\sin2\rho} \tag{3.11}$$
as $n\to\infty$.

From (3.1) and (3.6), we see that
$$L_\nu^{n+m}(4n\cos^2\rho)=L_\nu^{n+m}\big(4A_\nu^{n+m}\cos^2(\rho+\theta)\big)=\frac{(-1)^{n+m}e^{2n\cos^2\rho}}{(2\cos\rho)^{\nu}(\pi n\sin2(\rho+\theta))^{1/2}}\Big\{\cos\Big(A_\nu^{n+m}\{\sin2(\rho+\theta)-2(\rho+\theta)\}+\frac{\pi}{4}\Big)+O\Big(\frac{1}{n\rho^3}\Big)+O\Big(\frac{1}{n(\pi/2-\rho)}\Big)\Big\} \tag{3.12}$$
as $n\rho^3\to\infty$ and $n(\pi/2-\rho)\to\infty$. Therefore, substituting (3.11) into (3.12), we finally derive (3.4).

To show (ii), let $\theta=\theta(n,\rho)$ be such that
$$\cosh\rho=\Big(\frac{A_\nu^{n+m}}{n}\Big)^{\frac12}\cosh(\rho+\theta). \tag{3.13}$$
From this, we have that
$$\theta=\frac{O(n^{-1})}{\sinh\rho}, \tag{3.14}$$
$$\cosh(\rho+\theta)=\Big\{1-\frac{A_\nu^m}{2n}+O(n^{-2})\Big\}\cosh\rho \tag{3.15}$$
as $n\to\infty$. Combining this with $\cosh(\rho+\theta)=\cosh\rho+\theta\sinh\rho+O(\theta^2)$ as $\theta\to0$, we obtain
$$\theta=-\frac{A_\nu^m}{2n}\frac{\cosh\rho}{\sinh\rho}+\frac{O(n^{-2})}{\sinh^3\rho} \tag{3.16}$$
as $n\to\infty$. By the Taylor expansion, (3.16) and (3.14) yield
$$A_\nu^{n+m}\big\{2\theta+e^{-2\rho}(e^{-2\theta}-1)\big\}=A_\nu^{n+m}\Big\{2\theta(1-e^{-2\rho})+\frac{O(n^{-2})}{\sinh^2\rho}\Big\}=-A_\nu^m(1+e^{-2\rho})+\frac{O(n^{-1})}{\sinh^2\rho}$$
as $n\to\infty$. Thereby, combining this with the fact that $1+e^{-2\rho}=2\cosh^2\rho-\sinh2\rho$, we get
$$A_\nu^{n+m}\big(1+2(\rho+\theta)+e^{-2(\rho+\theta)}\big)=A_\nu^{n+m}\big(1+2\rho+e^{-2\rho}\big)+A_\nu^{n+m}\big(2\theta+e^{-2\rho}(e^{-2\theta}-1)\big)=2n\cosh^2\rho+n(2\rho-\sinh2\rho)+2\rho A_\nu^m+\frac{O(n^{-1})}{\sinh^2\rho} \tag{3.17}$$
as $n\to\infty$.

From (3.2) and (3.13), we have
$$L_\nu^{n+m}(4n\cosh^2\rho)=L_\nu^{n+m}\big(4A_\nu^{n+m}\cosh^2(\rho+\theta)\big)=\frac{(-1)^{n+m}\exp\big\{A_\nu^{n+m}\big(1+2(\rho+\theta)+e^{-2(\rho+\theta)}\big)\big\}}{(2\cosh(\rho+\theta))^{\nu+1}(2\pi A_\nu^{n+m}\tanh(\rho+\theta))^{1/2}}\Big\{1+O\Big(\frac{1+\rho^3}{n\rho^3}\Big)\Big\} \tag{3.18}$$
as $n\rho^3\to\infty$. Finally, substituting (3.17) into (3.18), we obtain (3.5). □

Asymptotic estimates of determinantal kernels

Using the results in Section 3.2, we derive asymptotic estimates of the determinantal kernels. For convenience, we set
$$M_\alpha^N(x,y)=\frac{1}{\sqrt{N}}\,K_{G,\alpha}^N(\sqrt{N}x,\sqrt{N}y). \tag{3.19}$$
Note that (1.4) implies
$$M_\alpha^N(x,y)=K_\alpha^N(Nx,Ny). \tag{3.20}$$

Fix a constant $\gamma$ satisfying $\frac25<\gamma<\frac12$, and set
$$U_1^N=[-\sqrt2+N^{-\gamma},0)\cup(0,\sqrt2-N^{-\gamma}],\qquad U_2^N=(-\infty,-\sqrt2-N^{-\gamma}]\cup[\sqrt2+N^{-\gamma},\infty),$$
$$T^N=[-\sqrt2-N^{-\gamma},-\sqrt2+N^{-\gamma}]\cup[\sqrt2-N^{-\gamma},\sqrt2+N^{-\gamma}].$$
Remark that $\mathbb{R}\setminus\{0\}=U_1^N\cup U_2^N\cup T^N$. Furthermore, we set $U^N=U_1^N\cup U_2^N$. Hereafter, we always suppose $\alpha>1/2$.

(i) For any $x\in U_1^N$, we have
$$M_\alpha^N(x,x)=\frac{\sqrt{2-x^2}}{\pi}+O\Big(\frac{1}{N^{1-2\gamma}}\Big)+O\Big(\frac{1}{N|x|}\Big) \tag{3.21}$$
as $N\to\infty$ and $N|x|\to\infty$.

(ii) There exist positive constants $c_3$ and $L$ such that, for any $x\in U_2^N$ and $N\in\mathbb{N}$ satisfying $N(x^2-2)^2>L$, we have
$$M_\alpha^N(x,x)\le\frac{c_3}{N(x^2-2)^2}. \tag{3.22}$$

We give the proof for the case N = 2 k for k N; the case of odd N can be proved in the same way. It suffices to consider x > 0. Put N x = 2 k cos ρ. Considering | 2 x 2 | N γ for x U 1 N , we see ρ 1 = O ( N γ / 2 ) as N . Set ρ ˜ = sin 2 ρ 2 ρ for simplicity, and (3.4) becomes λ ν k + m ( N x ) = ( 1 ) k + m k ν 1 2 ( π sin 2 ρ ) 1 / 2 { cos ( k ρ ˜ 2 ρ A ν m + π 4 ) + O ( 1 N 1 + 3 γ / 2 ) + O ( 1 N | x | ) } as N and N | x | . Then, this yields 2 N x { ( λ α + 1 2 k 1 ( N x ) ) 2 λ α 1 2 k ( N x ) λ α + 3 2 k 2 ( N x ) } = 2 k α π { sin ρ + O ( 1 N 1 + 2 γ ) + O ( 1 N | x | ) } as N and N | x | . Here, we used the fact that cos 2 A cos ( A + B ) cos ( A B ) = sin 2 B with A = k ρ ˜ ( α 1 2 ) ρ + π 4 and B = ρ. Furthermore, we have λ α + 1 2 k 1 ( N x ) λ α 1 2 k ( N x ) = k α 1 π sin 2 ρ { O ( 1 ) + O ( 1 N 1 + 3 γ / 2 ) + O ( 1 N | x | ) } as N and N | x | . Then, (3.23) and (3.24) with Lemma 3.2 yield (3.21). Next, we shall show (ii). Using (3.5) for N x = 2 k cosh ρ, we get 2 N x { ( λ α + 1 2 k 1 ( N x ) ) 2 λ α 1 2 k ( N x ) λ α + 3 2 k 2 ( N x ) } = k α e 2 k ( 2 ρ sinh 2 ρ ) + ( 2 α 1 ) ρ 2 π sinh ρ × O ( 1 + ρ 3 N ρ 3 ) and λ α + 1 2 k 1 ( N x ) λ α 1 2 k ( N x ) = k α 1 e 2 k ( 2 ρ sinh 2 ρ ) + 2 α ρ 2 3 π sinh ρ cosh ρ { 1 + O ( 1 + ρ 3 N ρ 3 ) } as N ρ 3 . Then, combining these with Lemma 3.2, we obtain (3.22). □ There exist a positive constant c 4 and f ( N , x ) O ( ( N | x | ) 1 ) as N | x | such that, for all N N and x U N , M α N ( x , x ) c 4 ( 1 + f ( N , x ) ) . There exists a positive constant c 5 such that, for all N N and x T N , M α N ( x , x ) c 5 N 1 3 . There exist a positive constant c 6 and f ( N , x ) O ( ( N | x | ) 1 ) as N | x | such that, for all N N and x , y U N , | M α N ( x , y ) | c 6 ( 1 + f ( N , x ) ) ( 1 + f ( N , y ) ) N | x y | | 2 x 2 | 1 / 4 | 2 y 2 | 1 / 4 . Inequality (3.25) follows from (3.21) and (3.22). Let N = 2 k (the case N = 2 k + 1 is proven in the same manner). 
For ν > 0, we see N x 2 4 A ν k = o ( k 3 5 ) for x T N since γ < 2 / 5. Therefore, (3.3) yields | λ ν k ( N x ) | = N ν / 2 | x | ν e N x 2 2 | L ν k ( N x 2 ) | C N ν 2 1 3 for some constant C. Here, we used the fact that Ai is bounded on R and Bi is bounded on ( , 0 ) (see [4, Section 9], for example). Combining this with Lemma 3.2 yields (3.26). Furthermore, combining Lemma 3.1 with (3.4) and (3.5), we get (3.27).  □

The logarithmic derivative

Finite particle approximation of the logarithmic derivative

The logarithmic derivative d μ of μ can be approximated by that of finite particle systems. Let { μ N } N N be a sequence of random point fields such that μ N ( s ( S ) = N ) = 1. Moreover, ρ n and ρ N , n stand for the n-correlation functions of μ and μ N , respectively. Then, we assume the following:

For each R N, lim N ρ N , n = ρ n uniformly on S R n for each n N , sup N N sup x n S R n ρ N , n ( x n ) c 7 n n c 8 n for any n N , where 0 < c 7 ( R ) < ∞ and 0 < c 8 ( R ) < 1 are constants independent of n.

Note that (B1) implies lim N μ N = μ weakly.
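For determinantal random point fields, bounds of the type (4.2) are obtained from the Hadamard inequality det ( K ( x i , x j ) ) i , j = 1 n ∏ i K ( x i , x i ) for positive semi-definite kernel matrices (this is also how (4.2) is verified in the proof of Theorem 2.11 below). A minimal numerical check, using the sine kernel sin ( x y ) / ( x y ) as an illustrative positive-definite stand-in and three sample points:

```python
import math

def K(x, y):
    # sine kernel, a positive-definite stand-in for the kernels behind (B1)
    return 1.0 if x == y else math.sin(x - y) / (x - y)

xs = [0.0, 0.8, -1.1]          # illustrative sample points
G = [[K(u, v) for v in xs] for u in xs]

# 3x3 determinant, i.e. the 3-correlation rho^3(x_1, x_2, x_3)
rho3 = (G[0][0] * (G[1][1] * G[2][2] - G[1][2] * G[2][1])
        - G[0][1] * (G[1][0] * G[2][2] - G[1][2] * G[2][0])
        + G[0][2] * (G[1][0] * G[2][1] - G[1][1] * G[2][0]))

# Hadamard bound: product of diagonal entries of the PSD kernel matrix
hadamard = G[0][0] * G[1][1] * G[2][2]
assert 0.0 <= rho3 <= hadamard + 1e-12
```

The same determinant-vs-diagonal comparison, with the diagonal bounded by a constant on compacts, yields the n-uniform growth rate required in (4.2).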

Let u , u N : S R and g : S 2 R be measurable functions. For r > 0, we set g r ( x , y ) = i χ r ( x y i ) g ( x , y i ) , w r ( x , y ) = i ( 1 χ r ( x y i ) ) g ( x , y i ) , where y = i δ y i and χ r C 0 ( S ) is a cut-off function such that 0 χ r 1, χ r ( x ) = 0 for | x | r + 1 and χ r ( x ) = 1 for | x | r. We set S ˜ R = { x ; R 1 | x | R }. Following [25], we make assumption (B2). For each N, μ N has the logarithmic derivative d μ N such that d μ N ( x , y ) = u N ( x ) + g r ( x , y ) + w r ( x , y ) . Furthermore, u N , g r , and w r satisfy the following (i)–(iii) for some p ˆ > 1:

It holds that u N C 1 ( S { 0 } ). Furthermore, u N and u N converge uniformly to u and u on each compact set in S { 0 }, respectively.

It holds that g C 1 ( S 2 { x y } ). In addition, for each R N, lim p lim sup N x S ˜ R , | x y | 2 p χ r ( x y ) | g ( x , y ) | p ˆ ρ x N , 1 ( y ) d x d y = 0 , where ρ x N , 1 is the one-correlation function of the reduced Palm measure μ x N .

For each R N, lim r lim sup N S ˜ R × S | w r ( x , y ) | p ˆ d μ N , [ 1 ] = 0 .

Then, we obtain an explicit expression of the logarithmic derivative of μ by finite particle approximation ([25, Theorem 45]).

Assume (B1) and (B2). Then, the logarithmic derivative d μ of μ exists in L loc p ( μ [ 1 ] ) for 1 p < p ˆ, and it is represented as d μ ( x , y ) = u ( x ) + lim r g r ( x , y ) . Here, the convergence lim r g r takes place in L loc p ( μ [ 1 ] ).

The logarithmic derivative of the generalised sine random point field

Let μ α N be the determinantal random point field whose kernel is K α N as in (1.4). In other words, the (labelled) density of μ α N is given by m α N ( d x N ) = 1 Z 1 i < j N | x i x j | 2 k = 1 N | x k | 2 α e x k 2 N d x N . Let ρ α N , n be the n-correlation function of μ α N . Furthermore, let μ α , x N be the reduced Palm measure of μ α N conditioned at x, and ρ α , x N , n denotes its n-correlation function. We shall apply the general result in Section 4.1 taking μ = μ α and μ N = μ α N .

From (4.5), the logarithmic derivative of μ α N is given by d μ α N ( x , y ) = j 2 x y j + 2 α x 2 x N for y = j = 1 N 1 δ y j . Then, we shall show (B2) under the setting of u N ( x ) = 2 α x 2 x N , g ( x , y ) = 2 x y . The most difficult assumption in Lemma 4.1 to verify is (4.4), so we introduce a sufficient condition for it. We set | | · | | R = sup x S ˜ R | · |.
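As a sanity check of the expression for d μ α N above, the sketch below numerically differentiates the logarithm of the labelled density m α N in its first coordinate and compares with the formula j 2 / ( x y j ) + 2 α / x 2 x / N . The particle positions, α, and N are illustrative values, not taken from the text.

```python
import math

# Log of the unnormalised labelled density m_alpha^N:
# prod |x_i - x_j|^2 * prod |x_k|^{2 alpha} e^{-x_k^2 / N}
def log_density(xs, alpha, N):
    s = 0.0
    n = len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            s += 2.0 * math.log(abs(xs[i] - xs[j]))
    for x in xs:
        s += 2.0 * alpha * math.log(abs(x)) - x * x / N
    return s

# The claimed logarithmic derivative in the first coordinate
def drift_formula(x, others, alpha, N):
    return sum(2.0 / (x - y) for y in others) + 2.0 * alpha / x - 2.0 * x / N

alpha, N = 0.7, 5                      # illustrative parameters
xs = [0.9, -1.3, 2.1, -0.4, 1.7]       # illustrative configuration
h = 1e-6                               # central-difference step
num = (log_density([xs[0] + h] + xs[1:], alpha, N)
       - log_density([xs[0] - h] + xs[1:], alpha, N)) / (2 * h)
exact = drift_formula(xs[0], xs[1:], alpha, N)
assert abs(num - exact) < 1e-4
```

The terms 2 α / x and 2 x / N come from the weight | x | 2 α e x 2 / N , and the interaction sum from the squared Vandermonde factor.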

We set S x , r = { y S ; r | x y | < }. The following conditions (4.7)(4.10) imply (4.4) with p ˆ = 2: lim r lim sup N | | S x , r ρ α N , 1 ( y ) x y d y | | R = 0 , lim r lim sup N | | S x , r ρ α , x N , 1 ( y ) ρ α N , 1 ( y ) x y d y | | R = 0 , lim r lim sup N | | S x , r ρ α N , 1 ( y ) ( x y ) 2 d y + ( S x , r ) 2 ρ α N , 2 ( y , z ) ρ α N , 1 ( y ) ρ α N , 1 ( z ) ( x y ) ( x z ) d y d z | | R = 0 , lim r lim sup N | | S x , r ρ α , x N , 1 ( y ) ρ α N , 1 ( y ) ( x y ) 2 d y + ( S x , r ) 2 ρ α , x N , 2 ( y , z ) ρ α , x N , 1 ( y ) ρ α , x N , 1 ( z ) ρ α N , 2 ( y , z ) + ρ α N , 1 ( y ) ρ α N , 1 ( z ) ( x y ) ( x z ) d y d z | | R = 0 .

This lemma follows from [25, Lemma 52, Lemma 53]. Remark that, in [25], the sup norm | | · | | R is defined as sup x S R | · |. Here, it is taken over S ˜ R instead of S R , considering the space of test functions of the logarithmic derivative in Definition 2.9.  □

Proof of Theorem 2.11

We begin by rewriting the conditions in Lemma 4.2 in terms of determinantal kernels. We recall that reduced Palm measures of determinantal random point fields are also determinantal. In particular, μ α , x N is determinantal, and its kernel K α , x N is given by K α , x N ( y , z ) = K α N ( y , z ) K α N ( x , y ) K α N ( x , z ) K α N ( x , x ) . With the aid of (4.11), we see that ρ α N , 1 ( y ) = K α N ( y , y ) ρ α , x N , 1 ( y ) ρ α N , 1 ( y ) = K α N ( x , y ) 2 K α N ( x , x ) ρ α N , 2 ( y , z ) ρ α N , 1 ( y ) ρ α N , 1 ( z ) = K α N ( y , z ) 2 ρ α , x N , 2 ( y , z ) ρ α , x N , 1 ( y ) ρ α , x N , 1 ( z ) { ρ α N , 2 ( y , z ) ρ α N , 1 ( y ) ρ α N , 1 ( z ) } = 2 K α N ( y , z ) K α N ( x , y ) K α N ( x , z ) K α N ( x , x ) K α N ( x , y ) 2 K α N ( x , z ) 2 K α N ( x , x ) 2 . Thereby, (4.7)–(4.10) in Lemma 4.2 are rewritten as follows.

To simplify notation, we set x N = x N , T x , r N = { y R ; r N | x N y | < } . Then, (4.7)(4.10) are equivalent to (4.16)(4.19), respectively. lim r lim sup N | | T x , r N M α N ( y , y ) x N y d y | | R = 0 , lim r lim sup N | | T x , r N 1 x N y M α N ( x N , y ) 2 M α N ( x N , x N ) d y | | R = 0 , lim r lim sup N | | T x , r N M α N ( y , y ) N | x N y | 2 d y ( T x , r N ) 2 M α N ( y , z ) 2 ( x N y ) ( x N z ) d y d z | | R = 0 , lim r lim sup N | | T x , r N 1 N | x N y | 2 M α N ( x N , y ) 2 M α N ( x N , x N ) d y + ( T x , r N ) 2 1 ( x N y ) ( x N z ) ( 2 M α N ( y , z ) M α N ( x N , y ) M α N ( x N , z ) M α N ( x N , x N ) M α N ( x N , y ) 2 M α N ( x N , z ) 2 M α N ( x N , x N ) 2 ) d y d z | | R = 0 .

Recall that M α N ( x , y ) = K α N ( N x , N y ) in (3.19). Then, by (4.12) and the change of variables y N y, we have that S x , r ρ α N , 1 ( y ) x y d y = T x , r N K α N ( N y , N y ) x N y N d y = T x , r N M α N ( y , y ) x N y d y . Therefore, we obtain the equivalence between (4.7) and (4.16). By the same argument with (4.13)–(4.15), it holds that (4.8)–(4.10) are equivalent to (4.17)–(4.19), respectively.  □
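The Palm-kernel identities (4.11)–(4.15) used above are consequences of the Schur-complement structure of the conditioned kernel. A minimal numerical check, using the sine kernel sin ( x y ) / ( x y ) as an illustrative positive-definite stand-in and three sample points x , y , z :

```python
import math

def K(x, y):
    # sine kernel, standing in for K_alpha^N in this illustration
    return 1.0 if x == y else math.sin(x - y) / (x - y)

def det2(a, b, c, d):
    return a * d - b * c

def det3(m):
    # cofactor expansion along the first row
    return (m[0][0] * det2(m[1][1], m[1][2], m[2][1], m[2][2])
            - m[0][1] * det2(m[1][0], m[1][2], m[2][0], m[2][2])
            + m[0][2] * det2(m[1][0], m[1][1], m[2][0], m[2][1]))

x, y, z = 0.3, 1.1, -0.7   # illustrative points

# Palm kernel as in (4.11)
def K_x(u, v):
    return K(u, v) - K(x, u) * K(x, v) / K(x, x)

# (4.13): rho_x^{1}(y) - rho^{1}(y) = -K(x,y)^2 / K(x,x)
lhs = K_x(y, y) - K(y, y)
assert abs(lhs + K(x, y) ** 2 / K(x, x)) < 1e-12

# rho_x^{2}(y,z) two ways: the 3x3 determinant divided by K(x,x)
# versus the 2x2 determinant of the Palm kernel K_x (Schur complement)
m = [[K(x, x), K(x, y), K(x, z)],
     [K(y, x), K(y, y), K(y, z)],
     [K(z, x), K(z, y), K(z, z)]]
assert abs(det3(m) / K(x, x)
           - det2(K_x(y, y), K_x(y, z), K_x(z, y), K_x(z, z))) < 1e-12
```

Expanding the 2×2 Palm determinant in the second assertion and collecting terms reproduces exactly the right-hand side of (4.15).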

We shall prove (4.16)–(4.19) in order, in the rest of this section.

For 0 < q 1, we have lim N | | T N U 2 N M α N ( y , y ) q | x N y | d y | | R = 0 .

From (3.26), we have | | T N M α N ( y , y ) q | x N y | d y | | R | | T N c 4 q N q / 3 | x N y | d y | | R = c 4 q N q / 3 ( log | x N ( 2 N γ ) | log | x N ( 2 + N γ ) | + log | x N ( 2 N γ ) | log | x N ( 2 + N γ ) | ) = O ( N q 3 + γ ) 0 as N . Here, we used the fact that q 1 and γ < 2 / 5 in (3.20) in the last line. Moreover, from (3.22) and 1 / 2 < γ, we have | | U 2 N M α N ( y , y ) q | x N y | d y | | R | | U 2 N c 3 q | x N y | N q | y 2 2 | 2 q d y | | R 0 as N . Hence, from (4.21) and (4.22), we have (4.20).  □

Equation (4.16) holds.

Our proof starts with the observation that (3.21) yields lim r lim sup N | | T x , r N U 1 N M α N ( y , y ) x N y d y | | R = 0 . Indeed, we see that lim N | | T x , r N U 1 N 1 x N y 2 y 2 π d y | | R = | P . V . 2 2 1 y 2 y 2 π d y | = 0 , and, from 1 / 2 < γ, lim N | | T x , r N U 1 N 1 | x N y | 1 N 1 + 2 γ d y | | R = 0 . Moreover, lim N | | T x , r N U 1 N 1 | x N y | 1 N | y | d y | | R = | | 1 x log x + r r x | | R . From (4.24), (4.25), and (4.26) with (3.21), we have (4.23). Therefore, (4.16) follows from (4.20) with q = 1 and (4.23).  □

Applying the Schwarz inequality to (1.3), we have that K G , α N ( x , y ) 2 K G , α N ( x , x ) K G , α N ( y , y ), which yields M α N ( x , y ) 2 M α N ( x , x ) M α N ( y , y ) .
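This inequality is the statement that every 2×2 principal minor of a positive semi-definite kernel matrix is nonnegative. A quick numerical check, with the sine kernel as an illustrative positive-definite stand-in:

```python
import math

def K(x, y):
    # sine kernel, an illustrative positive-definite kernel
    return 1.0 if x == y else math.sin(x - y) / (x - y)

pts = [0.37 * i for i in range(-8, 9)]   # illustrative grid
for x in pts:
    for y in pts:
        # nonnegativity of the 2x2 minor: K(x,y)^2 <= K(x,x) K(y,y)
        assert K(x, y) ** 2 <= K(x, x) * K(y, y) + 1e-12
```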

From S ˜ R = { R 1 | x | R }, Lemma 3.3 yields 0 < inf x S ˜ R K α ( x , x ) . Then, since M α N ( x N , x N ) = K α N ( x , x ) and lim N K α N ( x , x ) = K α ( x , x ) uniformly for x S ˜ R , we see that c 9 below is finite for each R N: c 9 = sup N N sup x S ˜ R 1 M α N ( x N , x N ) < .

Let f ( N , y ) O ( ( N | y | ) 1 ) as N | y | . Then, there exist positive constants C and L such that | f ( N , y ) | C ( N | y | ) 1 for N , y satisfying N | y | > L. From this, for sufficiently large r, we have sup y T x , r N , N N , x S ˜ R | f ( N , y ) | < ∞ . Thus, from (3.25) and (3.27), there exists a positive constant c 10 independent of N and r such that M α N ( y , y ) c 10 for any y T x , r N U N , | M α N ( x N , y ) | c 10 N | x N y | | 2 y 2 | 1 / 4 for any x S ˜ R , y T x , r N U N , | M α N ( y , z ) | c 10 N | y z | | 2 y 2 | 1 / 4 | 2 z 2 | 1 / 4 for any y , z T x , r N U N .

The following (4.32) and (4.33) hold. lim r lim sup N | | T x , r N M α N ( x N , y ) 2 | x N y | M α N ( x N , x N ) d y | | R = 0 , lim r lim sup N | | T x , r N M α N ( x N , y ) | x N y | M α N ( x N , x N ) d y | | R = 0 . In particular, (4.17) holds.

Equation (4.17) follows from (4.32) immediately, so we first prove (4.32). From (4.27) and (4.20) with q = 1, we deduce that lim N | | T N U 2 N M α N ( x N , y ) 2 | x N y | M α N ( x N , x N ) d y | | R lim N | | T N U 2 N M α N ( y , y ) | x N y | d y | | R = 0 . From (4.28) and (4.30), we have, for large r, lim sup N | | T x , r N U 1 N M α N ( x N , y ) 2 | x N y | M α N ( x N , x N ) d y | | R lim sup N | | T x , r N U 1 N c 9 c 10 2 N 2 | x N y | 3 ( 2 y 2 ) 1 / 2 d y | | R c 9 c 10 2 r 2 . Here, we used the fact that | 2 y 2 | N γ for y U 1 N . Combining (4.34) with (4.35), we get (4.32).

By the same argument as above, (4.28) and (4.30) yield that lim sup N | | T x , r N U 1 N M α N ( x N , y ) | x N y | M α N ( x N , x N ) d y | | R lim sup N | | T x , r N U 1 N c 9 c 10 N | x N y | ( 2 y 2 ) 1 / 4 d y | | R 2 c 9 c 10 r . Furthermore, from (4.27), (4.28), and (4.20) with q = 1 / 2, we see that lim N | | T N U 2 N M α N ( x N , y ) | x N y | M α N ( x N , x N ) d y | | R lim N | | T N U 2 N c 9 1 2 M α N ( y , y ) 1 2 | x N y | d y | | R = 0 . Hence, (4.37) and (4.36) yield (4.33).  □

The following (4.38) and (4.39) hold: lim r lim sup N | | T x , r N M α N ( y , y ) N | x N y | 2 d y | | R = 0 , lim r lim sup N | | ( T x , r N ) 2 M α N ( y , z ) 2 | x N y | | x N z | d y d z | | R = 0 . In particular, (4.18) holds.

From (4.29), we see that lim sup N | | T x , r N U N M α N ( y , y ) N | x N y | 2 d y | | R 2 c 10 r . Define a positive constant c 11 as c 11 = lim sup N sup y T N | | | x N y | 1 | | R . Clearly, c 11 < . From (3.26), we have lim N | | T x , r N T N M α N ( y , y ) N | x N y | 2 d y | | R lim N 4 c 4 c 11 2 N 1 / 3 N γ N = 0 . Here, we used the fact that | T N | = 4 N γ and γ < 2 / 5. Therefore, (4.38) follows from (4.40) and (4.41).

Next we prove (4.39). First, we estimate the integral over U N × U N . We can see lim r lim sup N | | ( T x , r N U N ) 2 { | y z | N 1 } M α N ( y , z ) 2 | x N y | | x N z | d y d z | | R = 0 , lim r lim sup N | | ( T x , r N U N ) 2 { | y z | N 1 } M α N ( y , z ) 2 | x N y | | x N z | d y d z | | R = 0 . Indeed, from (4.31) and the Schwarz inequality, we have | | ( T x , r N U N ) 2 { | y z | N 1 } M α N ( y , z ) 2 | x N y | | x N z | d y d z | | R | | ( T x , r N U N ) 2 { | y z | N 1 } × c 10 2 N 2 | y z | 2 | 2 y 2 | 1 / 2 | 2 z 2 | 1 / 2 | x N y | | x N z | d y d z | | R | | ( T x , r N U N ) 2 { | y z | N 1 } c 10 2 N 2 | y z | 2 | 2 y 2 | | x N y | 2 d y d z | | R | | T x , r N U N c 10 2 N 2 | 2 y 2 | | x N y | 2 { { | y z | N 1 } 1 | y z | 2 d z } d y | | R = | | T x , r N U N 2 c 10 2 N | 2 y 2 | | x N y | 2 d y | | R . Hence, a computation similar to (4.35) yields lim sup N | | ( T x , r N U N ) 2 { | y z | N 1 } M α N ( y , z ) 2 | x N y | | x N z | d y d z | | R 4 c 10 2 r , which implies (4.42). We next show (4.43). From (4.27) and (4.29), it holds that | | ( T x , r N U N ) 2 { | y z | N 1 } M α N ( y , z ) 2 | x N y | | x N z | d y d z | | R | | ( T x , r N U N ) 2 { | y z | N 1 } c 10 2 | x N y | | x N z | d y d z | | R | | ( T x , r N U N ) 2 { | y z | N 1 } c 10 2 2 { 1 | x N y | 2 + 1 | x N z | 2 } d y d z | | R 2 c 10 2 N | | T x , r N 1 | x N y | 2 d y | | R = 4 c 10 2 r , which implies (4.43). Then, (4.42) and (4.43) give lim r lim sup N | | ( T x , r N U N ) 2 M α N ( y , z ) 2 | x N y | | x N z | d y d z | | R = 0 .

We next consider the integral over T N × T N . From (3.26) and (4.27), we obtain lim N | | ( T x , r N T N ) 2 M α N ( y , z ) 2 | x N y | | x N z | d y d z | | R lim N c 4 2 c 11 2 N 2 / 3 ( 4 N γ ) 2 = 0 . Here, we used | T N | = 4 N γ and γ < 2 / 5.

Lastly, we consider the case U N × T N . A similar argument yields | | ( T x , r N U N ) × ( T x , r N T N ) M α N ( y , z ) 2 | x N y | | x N z | d y d z | | R | | ( T x , r N U N ) × ( T x , r N T N ) M α N ( y , y ) M α N ( z , z ) | x N y | | x N z | d y d z | | R = | | T x , r N U N M α N ( y , y ) | x N y | d y T x , r N T N M α N ( z , z ) | x N z | d z | | R = O ( log N ) O ( N 1 / 3 + γ ) 0 as N . Collecting (4.44), (4.45), and (4.46), we finally obtain (4.39). Clearly, (4.18) follows from (4.38) and (4.39).  □

Equation (4.19) holds.

We estimate three terms in (4.19).

The first term in (4.19) vanishes. Indeed, from (4.27) and (4.38), we obtain lim r lim sup N | | T x , r N 1 N | x N y | 2 M α N ( x N , y ) 2 M α N ( x N , x N ) d y | | R lim r lim sup N | | T x , r N M α N ( y , y ) N | x N y | 2 d y | | R = 0 .

For the second term in (4.19), from the Schwarz inequality, we have | | ( T x , r N ) 2 M α N ( y , z ) M α N ( x N , y ) M α N ( x N , z ) | x N y | | x N z | M α N ( x N , x N ) d y d z | | R = | | ( T x , r N ) 2 M α N ( y , z ) 2 | x N y | | x N z | d y d z | | R 1 2 | | T x , r N M α N ( x N , y ) 2 | x N y | M α N ( x N , x N ) d y | | R . Thereby, (4.32) and (4.39) imply lim r lim sup N | | ( T x , r N ) 2 M α N ( y , z ) M α N ( x N , y ) M α N ( x N , z ) | x N y | | x N z | M α N ( x N , x N ) d y d z | | R = 0 .

Lastly, the third term in (4.19) converges to zero. Indeed, from (4.32), we see lim r lim sup N | | ( T x , r N ) 2 M α N ( x N , y ) 2 M α N ( x N , z ) 2 | x N y | | x N z | M α N ( x N , x N ) 2 d y d z | | R = lim r lim sup N | | T x , r N M α N ( x N , y ) 2 | x N y | M α N ( x N , x N ) d y | | R 2 = 0 . Therefore, we conclude (4.19) from (4.47), (4.48), and (4.49).  □

It is easy to see (B1). Indeed, (4.1) is verified by (1.6), and (4.2) follows from (1.9) and the Hadamard inequality. Additionally, (B2) holds for p ˆ = 2 under the setting of (4.6). (B2) (i) is clear, and direct computation with (4.11) yields (4.3) in (ii). Furthermore, we have (B2) (iii), since (4.16)–(4.19) in Lemma 4.3 follow from Lemma 4.5, Lemma 4.6, Lemma 4.7, and Lemma 4.8, respectively. Then, we have checked all the conditions in Lemma 4.1, which completes the proof of Theorem 2.11.  □

Quasi-Gibbs property

Sufficient conditions for the quasi-Gibbs property

To show Theorem 2.6, we use sufficient conditions for the quasi-Gibbs property of a random point field μ with logarithmic interaction which are derived in [27]. We set the following conditions (QG1)(QG2):

There exists a sequence of random point fields { μ N } N N satisfying (B1) and the following (i)–(iii):

μ N ( s ( S ) = N ) = 1 for each N N.

μ N is a ( Φ N , β log | x y | )-canonical Gibbs measure for each N N.

For each R, the self-potential Φ N satisfies the following: lim N Φ N ( x ) = Φ ( x ) for a.e. x , inf N N inf x S R Φ N ( x ) > − ∞ .

Let x = i δ x i . For l , r N, we define v l , r : S R as v l , r ( x ) = β x i S r c 1 x i l .

There exists l 0 N such that sup N N { 1 | x | < 1 | x | l 0 ρ N , 1 ( x ) d x } < ∞ and that, for each 1 l < l 0 , lim r sup N N | | v l , r | | L 1 ( S , μ N ) = 0 .

[27, Theorem 2.2] Assume (QG1) and (QG2). Then, μ is a ( Φ , β log | x y | )-quasi-Gibbs measure.

Proof of Theorem 2.6

We first check (QG2) to use Lemma 5.1.

It holds that sup N N { 1 | x | < 1 x 2 ρ α N , 1 ( x ) d x } < ∞ , lim r sup N N | | v 1 , r | | L 1 ( S , μ α N ) = 0 . In particular, we have (QG2) for l 0 = 2.

Recall ρ α N , 1 ( x ) = K α N ( x , x ) = M α N ( x / N , x / N ). Then, from (3.25), there exists a positive constant c 12 such that sup N N sup 1 | x | N ρ α N , 1 ( x ) c 12 . Therefore, we have 1 | x | < 1 | x | 2 ρ α N , 1 ( x ) d x 2 c 12 1 N 1 x 2 d x + 2 N ρ α N , 1 ( x ) x 2 d x . Using S ρ α N , 1 ( x ) d x = N, we see that N ρ α N , 1 ( x ) x 2 d x 1 N 2 N ρ α N , 1 ( x ) d x 1 N . Thereby, combining this with (5.3), we obtain (5.1). Moreover, we see that | | v 1 , r | | L 1 ( S , μ α N ) = | x | > r 2 ρ α N , 1 ( x ) x d x = | x | > r N 2 M α N ( x , x ) x d x . Then, by arguments similar to those yielding Lemma 4.5, we prove (5.2). Therefore, we obtain (QG2) for l 0 = 2.  □

Taking μ N = μ α N and μ = μ α , we check assumptions (QG1) and (QG2). It is easy to see (QG1) with Φ N ( x ) = x 2 N 1 2 α log | x |. Furthermore, (QG2) holds for l 0 = 2 from Lemma 5.2. Hence, we conclude Theorem 2.6 from Lemma 5.1.  □

Strong uniqueness

General framework for strong uniqueness

Following [29], we introduce a framework for the strong uniqueness of solutions to ISDEs in the case when S = R and the diffusion coefficient σ = 1. In order not to make this paper too long, several terminologies and symbols are not defined here. See [29] for precise definitions.

Recall that S s , i is defined in (2.1). Let H and S sde be Borel subsets of S such that H S sde S s , i . Define S sde S N and S sde [ 1 ] S × S as S sde = u 1 ( S sde ) , S sde [ 1 ] = u [ 1 ] 1 ( S sde ) , where u [ 1 ] : S × S S is given by u [ 1 ] ( x , s ) = δ x + s.

Let ( X , B ) be an S N × R N -valued continuous process defined on a filtered space ( Ω , F , P , { F t } ). We assume that ( Ω , F , P ) is a standard probability space. Then, the regular conditional probability P s = P ( · | X 0 = s ) exists for s, P X 0 1 -almost everywhere.

Let b : S sde [ 1 ] R be a Borel measurable function. Then, consider an ISDE of X = ( X i ) i N starting from l ( H ) with state space S sde : d X t i = d B t i + b ( X t i , X t i ) d t , X W ( S sde ) , X 0 l ( H ) .

For X = ( X i ) i N , we set X t m = i = m + 1 δ X t i . For ( u , v ) S m and v = i = 1 m 1 δ v i , where v = ( v 1 , , v m 1 ), we define b X m : [ 0 , ) × S m R as b X m ( t , ( u , v ) ) = b ( u , v + X t m ) . Let S sde m ( t , w ) = { s m = ( s 1 , , s m ) S m ; u ( s m ) + w t m S sde } , where w t m = i = m + 1 δ w t i for w t = i = 1 δ w t i . Let Y m = ( Y 1 , , Y m ) be a solution to the following SDE with random environment X defined on ( Ω , F , P s , { F t } ): d Y t m , i = d B t i + b X m ( t , ( Y t m , i , Y t m , i ) ) d t , Y t m S sde m ( t , X ) for all t , Y t m = s m . Here, we set Y t i = ( Y m , j ) j i m and s m = ( s 1 , , s m ) for s = ( s i ) S N .
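To make the finite-dimensional SDE (6.4) with random environment concrete, the following sketch runs an Euler–Maruyama discretisation for m tagged particles. Two simplifications are assumed for illustration: the environment particles are frozen at fixed positions (in (6.4) they evolve as part of X), and the drift b is the finite-N choice from (4.6), b ( x , y ) = Σ j 2 / ( x y j ) + 2 α / x 2 x / N ; all numerical parameters are hypothetical.

```python
import math
import random

random.seed(0)

# Euler-Maruyama sketch of the m-particle SDE (6.4), with the environment
# frozen (a simplification of the random environment X) and the drift
# taken from the illustrative finite-N expression (4.6).
alpha, N, m = 1.0, 7, 3
dt, steps = 1e-4, 1000
env = [3.0, -3.0, 4.5, -4.5]   # frozen environment positions (illustrative)
Y = [1.0, -1.0, 2.0]           # initial positions s_m (illustrative)

for _ in range(steps):
    newY = []
    for i, y in enumerate(Y):
        # interaction with the other tagged particles and the environment
        others = [Y[j] for j in range(m) if j != i] + env
        drift = (sum(2.0 / (y - z) for z in others)
                 + 2.0 * alpha / y - 2.0 * y / N)
        newY.append(y + drift * dt + random.gauss(0.0, math.sqrt(dt)))
    Y = newY

assert all(math.isfinite(y) for y in Y)
```

This is only a discretisation sketch: Definition 6.1 below concerns the measurable solution map F s m , which an Euler scheme approximates but does not construct.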

We formulate strong solutions to (6.3)–(6.5) and their uniqueness. We set W 0 ( R m ) = { w W ( R m ) ; w 0 = 0 }. Let C m , C t m be σ-fields on W 0 ( R m ) × W ( S N ) defined as in [29, p. 1154]. Let B t m = σ [ w s ; 0 s t , w W ( R m ) ].

([29, Definition 3.9]).

We say that Y m is a strong solution to (6.3)–(6.5) for ( X , B ) under P s if ( Y m , B m , X m ) satisfies (6.3)–(6.5) and there exists a function F s m : W 0 ( R m ) × W ( S N ) W ( S m ) which is C m -measurable and C t m / B t m -measurable for any t, such that Y m = F s m ( B m , X m ) holds P s -a.s. Here, B m = ( B 1 , , B m ) denotes the first m components of the Brownian motion B.

([29, Definition 3.10]).

SDE (6.3)–(6.5) is said to have a unique strong solution for ( X , B ) under P s if there exists a function F s m satisfying the same condition as in Definition 6.1 and, for any weak solution ( Y ˆ m , B m , X m ) to (6.3)–(6.5) under P s , we have Y ˆ m = F s m ( B m , X m ) .

Then, we introduce the IFC condition for ( X , B ) defined on ( Ω , F , P , { F t } ).

For each m N, SDE (6.3)–(6.5) has a unique strong solution F s m ( B m , X m ) for ( X , B ) under P s for P X 0 1 -a.s. s.

We next introduce several conditions for the strong uniqueness of solutions to ISDEs. See [29, Section 3] for the definitions of solutions to ISDEs such as strong solutions and unique strong solutions under constraints. For a process X = ( X i ) i N W ( S N ), let X t = i δ X t i . We make assumptions for μ and X under P.

μ is tail trivial.

P X t 1 μ for all 0 < t < .

P ( X W NE ( S s , i ) ) = 1.

P ( m r , T ( X ) < ∞ ) = 1 for each r , T N, where, for w W ( S N ), m r , T ( w ) = inf { m N ; min t [ 0 , T ] | w t n | > r for all n N such that n > m } .

Let F s : W 0 ( R N )