Modern Stochastics: Theory and Applications (VMSTA), ISSN 2351-6046, 2351-6054. VTeX, Mokslininkų g. 2A, 08412 Vilnius, Lithuania. Article VMSTA213, https://doi.org/10.15559/22-VMSTA213. Research Article. Asymptotic properties of the parabolic equation driven by stochastic measure. Boris Manikin (bmanikin@gmail.com, ORCID: https://orcid.org/0000-0002-6954-9806), Taras Shevchenko National University of Kyiv, Kyiv, Ukraine. © 2022 The Author(s). Published by VTeX. Open access article under the CC BY license.

A stochastic parabolic equation on [0, T] × R driven by a general stochastic measure, for which only σ-additivity in probability is assumed, is considered. The asymptotic behavior of its solution as t → ∞ is studied.

Keywords: stochastic measure, mild solution, stochastic parabolic equation, asymptotic behavior. MSC: 60G57
Introduction

In this paper we consider the stochastic parabolic equation

L u(t,x) dt + f(t,x,u(t,x)) dt + σ(t,x) dμ(x) = 0, u(0,x) = u₀(x),   (1)

where (t,x) ∈ [0,T] × R, μ is a general stochastic measure defined on the Borel σ-algebra on R (see Section 2), f and σ are measurable functions, and L is the operator

L u(t,x) = a(t) ∂²u(t,x)/∂x² + b(t) ∂u(t,x)/∂x + c(t) u(t,x) − ∂u(t,x)/∂t.   (2)

Here a, b, c are defined on [0,T]. We prove that under certain conditions on a, b, c, f, σ the solution of (1), considered in the mild sense, tends to 0 a.s. uniformly in x. Note that the regularity of the solution was proved in , the convergence of the solution in the case of convergence of the integrators was proved in , and the averaging principle for such an equation was established in .

The asymptotic behavior of the moments of solutions of a stochastic differential system driven by a Brownian motion was considered in . The convergence to zero, as time goes to infinity, of the solution of a nonautonomous logistic differential equation whose coefficients are disturbed by white noise and by centered and noncentered Poisson noises was studied in . The asymptotics of the solution of the stochastic heat equation with white noise, as the time variable goes to infinity for a fixed spatial coordinate, was studied in , while asymptotic properties of the solution of the stochastic wave equation driven by a Lévy process were given in . The behavior of solutions of different equations with a general stochastic measure, as the spatial coordinate goes to infinity, was considered in  and . In comparison with , where the asymptotics of the heat equation driven by a general stochastic measure as the time coordinate tends to infinity was considered, we study a more general parabolic equation.

The paper is organized as follows. Section 2 contains some general facts about stochastic measures and integrals with respect to them. In Section 3 we prove some technical facts and formulate the main result, which is proved in Section 4 together with the auxiliary lemma.

Preliminaries

Let (Ω, F, P) be a complete probability space and let B be the Borel σ-algebra on R. Denote by L₀ = L₀(Ω, F, P) the set of all real-valued random variables defined on (Ω, F, P). Convergence in L₀ means convergence in probability.

Definition 1. A σ-additive mapping μ : B → L₀ is called a stochastic measure (SM).

In other words, μ is a vector measure with values in L 0 . We do not assume any martingale properties or moment existence for SM.

Consider some examples of SMs. If M_t is a square integrable martingale then μ(A) = ∫₀^T 1_A(t) dM_t is an SM. An α-stable random measure on B for α ∈ (0,1) ∪ (1,2], as it is defined in [19, Sections 3.2–3.3], is an SM in the sense of Definition 1. For a fractional Brownian motion W_t^H with Hurst index H > 1/2 and a bounded measurable function f : [0,T] → R we can define an SM μ(A) = ∫₀^T f(t) 1_A(t) dW_t^H, see [13, Theorem 1.1]. Ref.  contains some other examples. The integral ∫_A g dμ, where g : R → R is a deterministic measurable function, A ∈ B and μ is an SM, is defined and its basic properties are given in [11, Chapter 7]. There, the integral with respect to a general stochastic measure was constructed and studied for μ defined on an arbitrary σ-algebra, but here we consider an SM on the Borel subsets of R. Note that every bounded measurable g is integrable with respect to (w.r.t.) any μ. In the sequel, μ denotes an SM, and C and C(ω) denote a positive constant and a positive random constant, respectively, whose exact values are not important (C < ∞, C(ω) < ∞ a.s.). We use the following statement.

Lemma 1 (Lemma 3.1 in [14]). Let ϕ_l : R → R, l ≥ 1, be measurable functions such that ϕ̃(x) = Σ_{l=1}^∞ |ϕ_l(x)| is integrable w.r.t. μ on R. Then Σ_{l=1}^∞ (∫_R ϕ_l dμ)² < ∞ a.s.

We consider the Besov spaces B₂₂^α([c,d]), 0 < α < 1, with the standard norm

‖g‖_{B₂₂^α([c,d])} = ‖g‖_{L₂([c,d])} + (∫₀^{d−c} (w₂(g,r))² r^{−2α−1} dr)^{1/2},

where w₂(g,r) = sup_{0 ≤ h ≤ r} (∫_c^{d−h} |g(y+h) − g(y)|² dy)^{1/2}.

For any j ∈ Z and all n ≥ 0, put

d_k^{(n)}(j) = j + k 2^{−n}, 0 ≤ k ≤ 2^n;  Δ_k^{(n)}(j) = (d_{k−1}^{(n)}(j), d_k^{(n)}(j)], 1 ≤ k ≤ 2^n.

The following lemma is a key tool for estimates of the stochastic integral.

Lemma 2 (Lemma 3 in [16]).
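As an aside, the dyadic construction above is easy to experiment with numerically. The following Python sketch (illustrative only; the path q and all names are our own choices, not from the paper) builds the points d_k^{(n)}(j) and the piecewise-constant approximation q_n used in Lemma 2, and checks that for a continuous path the sup-distance to q shrinks as n grows.

```python
import math

def dyadic_points(j, n):
    """Dyadic points d_k^(n)(j) = j + k*2^(-n), k = 0..2^n, of (j, j+1]."""
    return [j + k * 2.0 ** (-n) for k in range(2 ** n + 1)]

def q_n(q, j, n, s):
    """Piecewise-constant approximation of Lemma 2: on each interval
    Delta_k^(n)(j) = (d_{k-1}, d_k], q_n takes the value q(d_{k-1})."""
    k = max(1, min(2 ** n, math.ceil((s - j) * 2 ** n)))
    return q(j + (k - 1) * 2.0 ** (-n))

# For a continuous path q, sup_s |q_n(s) - q(s)| is controlled by the modulus
# of continuity of q on a grid of mesh 2^(-n), so it decreases as n grows.
q = math.sin   # an illustrative continuous path
def sup_error(n, j=0, probes=1000):
    return max(abs(q_n(q, j, n, j + (i + 1) / probes) - q(j + (i + 1) / probes))
               for i in range(probes))
```

For a Lipschitz path such as sin, the error is of order 2^{−n}, which is the mechanism behind the dyadic sums in Lemma 2.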
Let Z be an arbitrary set and let the function q(z,s) : Z × [j,j+1] → R be such that all paths q(z,·) are continuous on [j,j+1]. Denote

q_n(z,s) = Σ_{1≤k≤2^n} q(z, d_{k−1}^{(n)}(j)) 1_{Δ_k^{(n)}(j)}(s).

Then the random function

η(z) = ∫_{(j,j+1]} q(z,s) dμ(s), z ∈ Z,

has a version

η̃(z) = ∫_{(j,j+1]} q₀(z,s) dμ(s) + Σ_{n≥1} (∫_{(j,j+1]} q_n(z,s) dμ(s) − ∫_{(j,j+1]} q_{n−1}(z,s) dμ(s))

such that for all β > 0, ω ∈ Ω, z ∈ Z

|η̃(z)| ≤ |q(z,j) μ((j,j+1])| + {Σ_{n≥1} 2^{nβ} Σ_{1≤k≤2^n} |q(z, d_k^{(n)}(j)) − q(z, d_{k−1}^{(n)}(j))|²}^{1/2} × {Σ_{n≥1} 2^{−nβ} Σ_{1≤k≤2^n} |μ(Δ_k^{(n)}(j))|²}^{1/2}.

From Theorem 1.1 in  it follows that, for α = (β+1)/2,

{Σ_{n≥1} 2^{nβ} Σ_{1≤k≤2^n} |q(z, d_k^{(n)}(j)) − q(z, d_{k−1}^{(n)}(j))|²}^{1/2} ≤ C ‖q(z,·)‖_{B₂₂^α([j,j+1])}.

Lemma 1 implies that for each β > 0 and j ∈ Z

Σ_{n≥1} 2^{−nβ} Σ_{1≤k≤2^n} |μ(Δ_k^{(n)}(j))|² < +∞ a.s.   (6)

Formulation of the problem and the main result

We consider the mild solution of (1), i.e. the measurable random function u(t,x) = u(t,x,ω) : [0,T] × R × Ω → R that satisfies

u(t,x) = ∫_R p(t,x;0,y) u₀(y) dy + ∫₀^t ds ∫_R p(t,x;s,y) f(s,y,u(s,y)) dy + ∫_R dμ(y) ∫₀^t p(t,x;s,y) σ(s,y) ds   (7)

for each (t,x) ∈ [0,+∞) × R a.s., where p is the fundamental solution of the equation L u = 0. The properties of such solutions are considered in . For example, the solution of (7) exists, is unique and can be built as

u(t,x) = lim_{n→∞} u^{(n)}(t,x),   (8)

where u^{(0)}(t,x) = 0 and

u^{(n)}(t,x) = ∫_R p(t,x;0,y) u₀(y) dy + ∫₀^t ds ∫_R p(t,x;s,y) f(s,y,u^{(n−1)}(s,y)) dy + ∫_R dμ(y) ∫₀^t p(t,x;s,y) σ(s,y) ds.   (9)

The analogous iteration process for the stochastic heat equation is considered in more detail in . Let the coefficients of the operator (2) satisfy the following assumptions.

Assumption 1. The functions a(t), b(t), c(t) are continuous and bounded on [0,+∞), and |a(t₁) − a(t₂)| ≤ L|t₁ − t₂|^λ, a(t) ≥ δ, where t, t₁, t₂ ∈ [0,+∞) and L, λ, δ are positive constants.

Assumption 2. There exists a constant c₀ > 0 such that c(t) ≤ −c₀ for all t ≥ 0.
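For intuition, the scheme (8)–(9) is the classical method of successive approximations. A minimal deterministic sketch (our own toy example: the kernel p and the stochastic term are dropped, leaving a scalar Volterra equation u(t) = u₀ + ∫₀^t f(u(s)) ds with Lipschitz f) shows the iterates converging to the solution.

```python
import math

# Toy version of the successive approximations (8)-(9): solve
# u(t) = u0 + \int_0^t f(u(s)) ds by Picard iteration on a grid.
# All names and numerical values are illustrative, not from the paper.
def picard(f, u0, t_grid, n_iter):
    u = [0.0] * len(t_grid)            # u^(0) = 0, as in the text
    for _ in range(n_iter):
        new, integral = [u0], 0.0
        for i in range(1, len(t_grid)):
            # left-point quadrature of the integral of f(u^(n-1))
            integral += f(u[i - 1]) * (t_grid[i] - t_grid[i - 1])
            new.append(u0 + integral)
        u = new
    return u

t_grid = [i / 1000 for i in range(1001)]       # the interval [0, 1]
u = picard(lambda v: v, 1.0, t_grid, 25)       # u' = u, u(0) = 1, so u(1) ~ e
```

With a Lipschitz f the sweeps contract at factorial rate, which is exactly why the limit in (8) exists; in (9) the same iteration runs with the kernel p and the n-independent stochastic term added at every step.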

Assumption 1 implies that p(t,x;s,y) = p(t, x−y; s, 0) for all t, s ∈ [0,+∞), x, y ∈ R. We consider u₀, f, σ in (7) under the following conditions.

Assumption 3. u₀ : R × Ω → R is measurable and for all y, y₁, y₂ ∈ R

|u₀(y,ω)| ≤ C(ω),  |u₀(y₁,ω) − u₀(y₂,ω)| ≤ L_{u₀}(ω)|y₁ − y₂|^{β(u₀)},

where C(ω), L_{u₀}(ω) are random constants and β(u₀) ≥ 1/2.

Assumption 4. f : R₊ × R × R → R is continuous, bounded, and

|f(s,y₁,v₁) − f(s,y₂,v₂)| ≤ L_f(|y₁ − y₂| + |v₁ − v₂|)

for some constant L_f and all s ∈ R₊, y₁, y₂, v₁, v₂ ∈ R.

Assumption 5. σ : R₊ × R → R is measurable and

|σ(s,y)| ≤ C_σ(s),  |σ(s,y₁) − σ(s,y₂)| ≤ L_σ(s)|y₁ − y₂|^{β(σ)}

for some constant 1/2 < β(σ) < 1, all s ∈ R₊, y₁, y₂ ∈ R and bounded functions C_σ, L_σ : R₊ → R such that C_σ(s) → 0 and L_σ(s) → 0 as s → ∞.

To proceed further, we need some statements about L and p. The following lemma [6, Theorem 10 §1] is formulated for our specific L.

Lemma 3. Assume that the function v(t,x) : [0,T] × R → R is bounded, Assumptions 1 and 2 hold, |v(0,x)| ≤ M₁ and |Lv(t,x)| ≤ M₂. Then

|v(t,x)| ≤ e^{−c₀t}(M₁ + M₂t).
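To see the decay quantitatively, one can simply tabulate the right-hand side of the lemma; the parameter values below are illustrative placeholders of our own, not values from the paper.

```python
import math

# The bound of Lemma 3, e^{-c0 t} (M1 + M2 t), with illustrative
# values c0 = 0.5, M1 = M2 = 1 (hypothetical numbers).
def bound(t, c0=0.5, m1=1.0, m2=1.0):
    return math.exp(-c0 * t) * (m1 + m2 * t)

# The exponential dominates the linear factor, so the bound tends to 0
# as t -> infinity; this is what later drives the decay of the
# deterministic part of the mild solution.
```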

Lemma 4. There exist positive constants ν, η, C such that for all x, y ∈ R, t > s > 0 the following estimates hold:

|p(t,x;s,y)| ≤ C (t−s)^{−1/2} e^{−ν(x−y)²/(t−s)} e^{−η(t−s)} E_{λ,λ+1}(C̃(t−s)^λ),   (10)

|∂p(t,x;s,y)/∂x| ≤ C (t−s)^{−1} e^{−ν(x−y)²/(t−s)} e^{−η(t−s)} E_{λ,λ+1/2}(C̃(t−s)^λ),   (11)

|∂p(t,x;s,y)/∂x − ∂p(t,x′;s,y)/∂x| ≤ C(φ) |x−x′|^φ (t−s)^{−3/2} max{e^{−ν(x−y)²/(t−s)}, e^{−ν(x′−y)²/(t−s)}} e^{−η(t−s)} E_{λ,λ}(C̃(t−s)^λ),   (12)

where E_{α,β}(z) = Σ_{k=0}^∞ z^k/Γ(αk+β) is the Mittag-Leffler function and φ < 1.

Proof. We represent p as

p(t,x;s,y) = W(t,x;s,y) + ∫_s^t dθ ∫_R W(t,x;θ,ζ) Φ(θ,ζ;s,y) dζ

(see, for example, [6, (4.18)]). Here

W(t,x;s,y) = (4π(t−s)a(s))^{−1/2} e^{−(x−y)²/(4(t−s)a(s))},

and the function Φ(t,x;s,y) is a solution of the integral equation

Φ(t,x;s,y) = LW(t,x;s,y) + ∫_s^t dθ ∫_R LW(t,x;θ,ζ) Φ(θ,ζ;s,y) dζ.   (13)

It is easy to calculate that

∂W(t,x;s,y)/∂x = W(t,x;s,y) · (y−x)/(2(t−s)a(s)),

∂²W(t,x;s,y)/∂x² = W(t,x;s,y) · ((x−y)²/(4(t−s)²a²(s)) − 1/(2(t−s)a(s))),

∂W(t,x;s,y)/∂t = W(t,x;s,y) · ((x−y)²/(4(t−s)²a(s)) − 1/(2(t−s))).

Using Assumption 1 and the boundedness of the function x^α e^{−x} on [0,+∞) for arbitrary α > 0, we easily obtain that

a(s) ∂²W/∂x² − ∂W/∂t = 0,   (14)

|W(t,x;s,y)| ≤ C e^{−ν(x−y)²/(t−s)} (t−s)^{−1/2},   (15)

|∂W(t,x;s,y)/∂x| ≤ C e^{−ν(x−y)²/(t−s)} (t−s)^{−1},   (16)

|∂²W(t,x;s,y)/∂x²| ≤ C e^{−ν(x−y)²/(t−s)} (t−s)^{−3/2},   (17)

where 0 < ν < (4 sup_t a(t))^{−1}. The solution of (13) can be rewritten as Φ(t,x;s,y) = Σ_{k=1}^∞ Φ_k(t,x;s,y), where Φ₁ = LW and

Φ_{k+1}(t,x;s,y) = ∫_s^t dθ ∫_R LW(t,x;θ,ζ) Φ_k(θ,ζ;s,y) dζ.
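The identity a(s) ∂²W/∂x² − ∂W/∂t = 0 can be sanity-checked numerically, since for fixed s the parametrix W is a Gaussian heat kernel with the diffusion coefficient frozen at a(s). The sketch below (parameter values are illustrative choices of ours) verifies the identity by central finite differences.

```python
import math

def W(t, x, s, y, a_s):
    """Gaussian parametrix kernel W(t,x;s,y) with the coefficient frozen
    at a(s) = a_s (a_s is an illustrative constant here)."""
    return math.exp(-(x - y) ** 2 / (4 * (t - s) * a_s)) \
        / math.sqrt(4 * math.pi * (t - s) * a_s)

def residual(t, x, s=0.0, y=0.0, a_s=0.7, h=1e-3):
    """Central-difference check of a(s) W_xx - W_t = 0."""
    w_xx = (W(t, x + h, s, y, a_s) - 2 * W(t, x, s, y, a_s)
            + W(t, x - h, s, y, a_s)) / h ** 2
    w_t = (W(t + h, x, s, y, a_s) - W(t - h, x, s, y, a_s)) / (2 * h)
    return a_s * w_xx - w_t
```

The residual is of the size of the discretization error, confirming (14) for the frozen-coefficient kernel.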
Using (14)–(17), we obtain

|Φ₁(t,x;s,y)| ≤ |a(t)−a(s)| |∂²W/∂x²| + |b(t)| |∂W/∂x| + |c(t)| |W|
 ≤ C e^{−ν(x−y)²/(t−s)} ((t−s)^{−3/2+λ} + (t−s)^{−1} + (t−s)^{−1/2})
 ≤ C e^{−ν(x−y)²/(t−s)} (t−s)^{−3/2+λ} e^{η₁(t−s)}.

Analogously to [6, (4.58)] we show that

|Φ_k(t,x;s,y)| ≤ (C/Γ((k−1)λ+λ)) (C̃(t−s)^λ)^{k−1} e^{−ν(x−y)²/(t−s)} e^{η₁(t−s)} (t−s)^{−3/2+λ},   (18)

where the constants C, C̃ depend on λ. Taking the sum of (18) over all k ≥ 1, we get the inequality

|Φ(t,x;s,y)| ≤ C e^{−ν(x−y)²/(t−s)} e^{η₁(t−s)} (t−s)^{−3/2+λ} E_{λ,λ}(C̃(t−s)^λ).   (19)

The following identity plays an important role in further estimates:

∫_R (t−θ)^{−1/2}(θ−s)^{−1/2} e^{−ν((ζ−y)²/(θ−s) + (x−ζ)²/(t−θ))} dζ = √(π/ν) (t−s)^{−1/2} e^{−ν(x−y)²/(t−s)}.   (20)

Now we use (19) and (20) to obtain (10):

|∫_s^t dθ ∫_R W(t,x;θ,ζ) Φ(θ,ζ;s,y) dζ|
 ≤ C ∫_s^t dθ ∫_R (t−θ)^{−1/2} e^{−ν(x−ζ)²/(t−θ)} E_{λ,λ}(C̃(θ−s)^λ) (θ−s)^{−3/2+λ} e^{η₁(θ−s)} e^{−ν(ζ−y)²/(θ−s)} dζ
 ≤ C e^{η₁(t−s)} ∫_s^t E_{λ,λ}(C̃(θ−s)^λ) (θ−s)^{−1+λ} dθ ∫_R (t−θ)^{−1/2}(θ−s)^{−1/2} e^{−ν((ζ−y)²/(θ−s)+(x−ζ)²/(t−θ))} dζ
 = C e^{η₁(t−s)} (t−s)^{−1/2} e^{−ν(x−y)²/(t−s)} ∫_s^t (θ−s)^{−1+λ} E_{λ,λ}(C̃(θ−s)^λ) dθ
 = C (t−s)^{−1/2+λ} e^{−ν(x−y)²/(t−s)} e^{η₁(t−s)} E_{λ,λ+1}(C̃(t−s)^λ),   (21)

where the last equality is a consequence of

∫₀^z E_{ρ,μ}(λt^ρ) t^{μ−1} dt = z^μ E_{ρ,μ+1}(λz^ρ)   (22)

[8, Chapter III, (1.15)]. From (15), (21), the inequality E_{λ,λ+1}(C̃(t−s)^λ) ≥ 1/Γ(λ+1), and the fact that the factors (t−s)^λ and e^{(η+η₁)(t−s)} can be absorbed into E_{λ,λ+1}(C̃₁(t−s)^λ) with a larger constant C̃₁, we obtain (10). We prove (11) analogously, using

(1/√π) ∫₀^z E_{ρ,μ}(λt^ρ) (z−t)^{−1/2} t^{μ−1} dt = z^{μ−1/2} E_{ρ,μ+1/2}(λz^ρ)

[8, Chapter III, (1.16) with α = 1/2] instead of (22).
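The Mittag-Leffler function appearing in all these estimates is straightforward to evaluate by truncating its series; a quick check against the classical identities E_{1,1}(z) = e^z and E_{1,2}(z) = (e^z − 1)/z guards against indexing mistakes. The truncation length below is an ad hoc choice that is ample for moderate |z|.

```python
import math

# Partial-sum evaluation of the Mittag-Leffler function
# E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(alpha*k + beta).
def mittag_leffler(alpha, beta, z, n_terms=80):
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(n_terms))

# Sanity checks: E_{1,1}(z) = e^z and E_{1,2}(z) = (e^z - 1)/z.
```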

In the proof of (12) we use the decomposition

∂p(t,x;s,y)/∂x − ∂p(t,x′;s,y)/∂x = (∂p(t,x;s,y)/∂x − ∂p(t,x′;s,y)/∂x) 1_{(x−x′)² < A(t−s)} + (∂p(t,x;s,y)/∂x − ∂p(t,x′;s,y)/∂x) 1_{(x−x′)² ≥ A(t−s)},   (23)

where A > 0. First assume that (x−x′)² < A(t−s); we prove that for such t, x, x′, s

|∂p(t,x;s,y)/∂x − ∂p(t,x′;s,y)/∂x| ≤ C(φ,A) |x−x′|^φ (t−s)^{−3/2} e^{−ν(x−y)²/(t−s)} e^{−η(t−s)} E_{λ,λ}(C̃(t−s)^λ).   (24)

Notice that

|∂W(t,x;s,y)/∂x − ∂W(t,x′;s,y)/∂x| ≤ C |∫_{x′}^x (t−s)^{−3/2} e^{−ν(v−y)²/(t−s)} dv|
 ≤ C (t−s)^{−3/2} e^{−ν₁(x*−y)²/(t−s)} ∫₀^{|x−x′|/2} e^{−(ν−ν₁)w²/(t−s)} dw
 ≤ C (t−s)^{−3/2} e^{−ν₁(x*−y)²/(t−s)} ∫₀^{|x−x′|/2} ((t−s)/w²)^l dw
 ≤ C (t−s)^{−3/2+l} e^{−ν₁(x*−y)²/(t−s)} |x−x′|^{1−2l},

where 0 ≤ l < 1/2, x* = εx + (1−ε)x′ for some 0 ≤ ε ≤ 1, 0 < ν₁ < ν; here we used the fact that the function x^α e^{−x} is bounded on [0,+∞) for arbitrary α > 0. Now we show that (x−x′)² < A(t−s) implies

e^{−ν₁(x*−y)²/(t−s)} ≤ C e^{−ν₂(x−y)²/(t−s)},   (25)

where ν₁ > ν₂ > 0, and C, ν₂ do not depend on y and x*, but C depends on A. We consider two cases.

1. |x−y| ≤ 3|x−x*|. We have the inequalities

e^{−ν₁(x*−y)²/(t−s)} ≤ 1 ≤ e^A e^{−(x−x*)²/(t−s)} ≤ e^A e^{−(x−y)²/(9(t−s))} = C₁ e^{−ν₂₁(x−y)²/(t−s)}.

2. |x−y| > 3|x−x*|. In this case we have the estimates

e^{−ν₁(x*−y)²/(t−s)} = e^{−ν₁(x−y)²/(t−s)} e^{−ν₁(x*−x)(x*+x−2y)/(t−s)}
 ≤ e^{−ν₁(x−y)²/(t−s)} e^{ν₁|x−x*|(2|x−y|+|x−x*|)/(t−s)}
 < e^{−ν₁(x−y)²/(t−s)} e^{7ν₁(x−y)²/(9(t−s))} = e^{−2ν₁(x−y)²/(9(t−s))} = C₂ e^{−ν₂₂(x−y)²/(t−s)}.

Now we set C = max{C₁, C₂}, ν₂ = min{ν₂₁, ν₂₂} and obtain (25). Therefore, the following estimate holds:

|∂W(t,x;s,y)/∂x − ∂W(t,x′;s,y)/∂x| ≤ C (t−s)^{−3/2+l} e^{−ν₂(x−y)²/(t−s)} |x−x′|^{1−2l}.   (26)

Consider the expression

∂p(t,x;s,y)/∂x − ∂p(t,x′;s,y)/∂x
 = (∂W(t,x;s,y)/∂x − ∂W(t,x′;s,y)/∂x)
 + ∫_{t−|x−x′|²/(2A)}^t dθ ∫_R (∂W(t,x;θ,ζ)/∂x) Φ(θ,ζ;s,y) dζ
 − ∫_{t−|x−x′|²/(2A)}^t dθ ∫_R (∂W(t,x′;θ,ζ)/∂x) Φ(θ,ζ;s,y) dζ
 + ∫_s^{t−|x−x′|²/(2A)} dθ ∫_R (∂W(t,x;θ,ζ)/∂x − ∂W(t,x′;θ,ζ)/∂x) Φ(θ,ζ;s,y) dζ
 = J₀ + J₁ + J₂ + J₃.

We estimate J₁ in the following way:

|J₁| ≤ C ∫_{t−|x−x′|²/(2A)}^t dθ ∫_R e^{−ν((ζ−y)²/(θ−s)+(x−ζ)²/(t−θ))} E_{λ,λ}(C̃(θ−s)^λ) (t−θ)^{−1} (θ−s)^{−3/2+λ} e^{η₁(θ−s)} dζ
 ≤ C e^{η₁(t−s)} e^{−ν(x−y)²/(t−s)} E_{λ,λ}(C̃(t−s)^λ) (t−s)^{−1/2} ∫_{t−|x−x′|²/(2A)}^t (t−θ)^{−1/2} (θ−s)^{λ−1} dθ
 ≤ C e^{η₁(t−s)} e^{−ν(x−y)²/(t−s)} E_{λ,λ}(C̃(t−s)^λ) (t−s)^{−1/2} (t−s−|x−x′|²/(2A))^{λ−1} (|x−x′|²/(2A))^{1/2}
 ≤ C e^{η₁(t−s)} e^{−ν(x−y)²/(t−s)} (t−s)^{−3/2+λ} E_{λ,λ}(C̃(t−s)^λ) |x−x′|.   (27)

Here we used the inequality t − s − |x−x′|²/(2A) > (t−s)/2 and (20). We estimate J₂ in an analogous way, using (25):

|J₂| ≤ C e^{η₁(t−s)} e^{−ν₂(x−y)²/(t−s)} (t−s)^{−3/2+λ} E_{λ,λ}(C̃(t−s)^λ) |x−x′|.   (28)
We apply (20) and (26) with Ã = 2A to prove the estimate for J₃:

|J₃| ≤ ∫_s^{t−|x−x′|²/(2A)} dθ ∫_R |∂W(t,x;θ,ζ)/∂x − ∂W(t,x′;θ,ζ)/∂x| |Φ(θ,ζ;s,y)| dζ
 ≤ C |x−x′|^{1−2l} ∫_s^{t−|x−x′|²/(2A)} dθ ∫_R e^{−ν₂((ζ−y)²/(θ−s)+(x−ζ)²/(t−θ))} E_{λ,λ}(C̃(θ−s)^λ) (t−θ)^{−3/2+l} (θ−s)^{−3/2+λ} e^{η₁(θ−s)} dζ
 ≤ C |x−x′|^{1−2l} e^{η₁(t−s)} e^{−ν₂(x−y)²/(t−s)} E_{λ,λ}(C̃(t−s)^λ) (t−s)^{−1/2} ∫_s^t (t−θ)^{l−1} (θ−s)^{λ−1} dθ
 ≤ C |x−x′|^{1−2l} e^{η₁(t−s)} e^{−ν₂(x−y)²/(t−s)} (t−s)^{−3/2+l+λ} E_{λ,λ}(C̃(t−s)^λ),   (29)

for arbitrary l ∈ (0,1/2). Now (24) follows from (26), (27), (28) and (29).

Let (x−x′)² ≥ A(t−s). This implies

|∂W(t,x;s,y)/∂x − ∂W(t,x′;s,y)/∂x| ≤ |∂W(t,x;s,y)/∂x| + |∂W(t,x′;s,y)/∂x|
 ≤ C (t−s)^{−1} max{e^{−ν(x−y)²/(t−s)}, e^{−ν(x′−y)²/(t−s)}}
 ≤ C (t−s)^{−3/2+l} (|x−x′|/√A)^{1−2l} max{e^{−ν(x−y)²/(t−s)}, e^{−ν(x′−y)²/(t−s)}},   (30)

where l ∈ (0,1/2). On the other hand,

|∫_s^t dθ ∫_R (∂W(t,x;θ,ζ)/∂x) Φ(θ,ζ;s,y) dζ|
 ≤ C ∫_s^t dθ ∫_R e^{−ν((ζ−y)²/(θ−s)+(x−ζ)²/(t−θ))} E_{λ,λ}(C̃(θ−s)^λ) (t−θ)^{−1} (θ−s)^{−3/2+λ} e^{η₁(θ−s)} dζ
 ≤ C e^{η₁(t−s)} e^{−ν(x−y)²/(t−s)} E_{λ,λ}(C̃(t−s)^λ) (t−s)^{−1/2} (|x−x′|/√A)^{1−2l} ∫_s^t (θ−s)^{−1+λ} (t−θ)^{−1+l} dθ
 = C e^{η₁(t−s)} (t−s)^{−3/2+l+λ} e^{−ν(x−y)²/(t−s)} (|x−x′|/√A)^{1−2l} E_{λ,λ}(C̃(t−s)^λ),

and the same estimate holds for ∫_s^t dθ ∫_R (∂W(t,x′;θ,ζ)/∂x) Φ(θ,ζ;s,y) dζ. Using (30), (24) and (23), we obtain (12); in (23) we can set, for example, A = 1. □

The main result of the paper is the following theorem.

Theorem 1. Let Assumptions 1–5 hold. Then there exists a version of the solution of (7) such that for each ω ∈ Ω

sup_{x∈R} |u(t,x)| → 0, t → ∞.   (31)

Proof of the auxiliary lemma and the main result

To prove Theorem 1, we consider the integral

∫_R dμ(y) ∫₀^t p(t,x;s,y) σ(s,y) ds.   (32)

Lemma 5. Assume that Assumptions 1, 2 and 5 hold.
Then there exists a version ν₁(t,x) of the integral (32) such that for each ω ∈ Ω

sup_{x∈R} |ν₁(t,x)| → 0, t → ∞.   (33)

Proof. It follows from Lemma 2 and (6) that a version ν₁(t,x) of the integral (32) exists such that for all x ∈ R, t ≥ 0, ω ∈ Ω

|ν₁(t,x)| ≤ Σ_{j∈Z} |q(t,x,j) μ((j,j+1])| + C Σ_{j∈Z} ‖q(t,x,·)‖_{B₂₂^α([j,j+1])} (Σ_{n=1}^∞ 2^{−n(2α−1)} Σ_{k=1}^{2^n} |μ(Δ_k^{(n)}(j))|²)^{1/2}
 ≤ (Σ_{j∈Z} |q(t,x,j)|²)^{1/2} (Σ_{j∈Z} |μ((j,j+1])|²)^{1/2} + C (Σ_{j∈Z} ‖q(t,x,·)‖²_{B₂₂^α([j,j+1])})^{1/2} (Σ_{j∈Z} Σ_{n=1}^∞ 2^{−n(2α−1)} Σ_{k=1}^{2^n} |μ(Δ_k^{(n)}(j))|²)^{1/2},   (34)

where q(t,x,y) = ∫₀^t p(t,x;s,y) σ(s,y) ds. Next we prove that ν₁(t,x) satisfies (33). In order to estimate the Besov norm on [j,j+1], we consider

|q(t,x,y+h) − q(t,x,y)| ≤ ∫₀^t |p(t,x;s,y+h) − p(t,x;s,y)| |σ(s,y)| ds + ∫₀^t |p(t,x;s,y+h)| |σ(s,y+h) − σ(s,y)| ds = I₁ + I₂,

where y, y+h ∈ [j,j+1]. Denote

Ω_{MN} = ([s,+∞) × R) \ ([s, s+M) × (y−N, y+N)),

Ω_{MN}^γ = {(t,x) : d(Ω_{MN}, (t,x)) ≤ γ},

η₁(v) = C e^{1/(|v|²−1)} 1_{{|v|<1}}, v ∈ R², with C such that ∫_{R²} η₁(v) dv = 1,

η_ε(v) = ε^{−2} η₁(v ε^{−1}),

Θ_{MN}^γ = (Ω_{MN}^{2γ} \ Ω_{MN}) ∩ {t > s},

where M, N, γ > 0. To estimate I₂ we introduce the function

p̃(t,x;s,y) = p(t,x;s,y) 1_{{t>s}} Ψ(t,x;s,y),

where Ψ(t,x;s,y) = ∫_{Ω_{MN}^γ} η_γ(t−v₁, x−v₂) dv, v = (v₁,v₂). It is obvious that, for each 0 < γ < min{M/2, N/2} and an arbitrary fixed pair (s,y), p̃ belongs to the class C^{1,2}([0,+∞) × R) as a function of (t,x). It is easy to see that p̃(t,x;s,y) = p(t,x;s,y) if (t,x) ∈ Ω_{MN}, and p̃(t,x;s,y) = 0 if (t,x) ∈ (([0,+∞) × R) \ Ω_{MN}^{2γ}) ∪ {s ≥ t}.
Moreover, the boundedness of p(t,x;s,y) on Ω_{MN}^{2γ} ∩ {t ≤ T} for each T > 0 and the fact that |Ψ| ≤ 1 imply the boundedness of p̃(t,x;s,y) on [0,T] × R.

Now we estimate Lp̃. Taking into consideration the properties of p̃, it is easy to see that Lp̃ = 0 outside the set Θ_{MN}^γ. Inside Θ_{MN}^γ,

Lp̃ = L(pΨ) = Ψ Lp + p LΨ + 2a (∂p/∂x)(∂Ψ/∂x) = p LΨ + 2a (∂p/∂x)(∂Ψ/∂x),   (35)

since Lp = 0 there. For the derivatives of η_ε(v) we have the following well-known inequalities:

|∂η_ε(v)/∂v_i| ≤ C ε^{−3},  |∂²η_ε(v)/∂v_i²| ≤ C ε^{−4},  i = 1, 2,   (36)

where the constant C does not depend on ε. Let us prove, for example, the first inequality in (36):

|∂η_ε(v)/∂v_i| ≤ ε^{−3} max_{|v|≤1} |∂η₁(v)/∂v_i| = C ε^{−3},

where we used that η₁ ∈ C^∞(R²). From (36) and the fact that η_ε = 0 outside the ball of radius ε it follows that

|∂Ψ/∂x| ≤ C γ^{−1},  |∂Ψ/∂t| ≤ C γ^{−1},  |∂²Ψ/∂x²| ≤ C γ^{−2}.

Using the estimates (35), (10) and (11), we obtain the inequality

|Lp̃| ≤ C e^{−ν(x−y)²/(t−s)} e^{−η(t−s)} (γ^{−2} (t−s)^{−1/2} E_{λ,λ+1}(C̃(t−s)^λ) + γ^{−1} (t−s)^{−1} E_{λ,λ+1/2}(C̃(t−s)^λ))   (37)

for each (t,x) ∈ Θ_{MN}^γ. Let γ ∈ (0,1/3) be fixed and consider t > 3γ. Then I₂ can be rewritten in the following way:

I₂ = ∫₀^{t−3γ} |p(t,x;s,y+h)| |σ(s,y+h) − σ(s,y)| ds + ∫_{t−3γ}^t |p(t,x;s,y+h)| |σ(s,y+h) − σ(s,y)| ds = I₂₁ + I₂₂.

We estimate the first summand using the function p̃(t,x;s,y) with M = N = 3γ. Note that t − s < 3γ on the set Θ_{MN}^γ; moreover, t − s > γ or |x − y| > γ there. In the first case we have the following consequence of (37):

|Lp̃| ≤ C e^{3ηγ} γ^{−5/2} (E_{λ,λ+1}(C̃(3γ)^λ) + E_{λ,λ+1/2}(C̃(3γ)^λ)).

In the second case,

|Lp̃| ≤ C e^{3ηγ} γ^{−7/2} (E_{λ,λ+1}(C̃(3γ)^λ) + E_{λ,λ+1/2}(C̃(3γ)^λ)).

In any case,

|Lp̃| ≤ C(γ) for all (t,x) ∈ [0,+∞) × R.
Using Lemma 3 (for arbitrary T > t), we obtain

|p̃(t,x;s,y)| ≤ C(γ) e^{−c₀t} t,

where the constant C(γ) does not depend on T. On the other hand, taking into account that p̃(t,x;s,y) = p(t,x;s,y) if t − s > 3γ, we obtain

|I₂₁| ≤ C(γ) h^{β(σ)} e^{−c₀t} t² → 0, t → ∞.   (38)

Now we estimate I₂₂. We get

|I₂₂| ≤ C h^{β(σ)} ∫_{t−3γ}^t (t−s)^{−1/2} e^{−ν(x−y)²/(t−s)} e^{−η(t−s)} E_{λ,λ+1}(C̃(t−s)^λ) L_σ(s) ds
 ≤ C h^{β(σ)} √γ e^{3γη} E_{λ,λ+1}(C̃(3γ)^λ) sup_{s∈[t−3γ,t]} |L_σ(s)| → 0, t → ∞.   (39)

For further estimates we use the function

p̂(t,x;s,y,h) = (p(t,x;s,y+h) − p(t,x;s,y)) 1_{{t>s}} Ψ(t,x;s,y).

Let M > 2γ, N > 1 + 2γ. Then the function p̂ has properties analogous to those of p̃. For example, p̂(t,x;s,y,h) = p(t,x;s,y+h) − p(t,x;s,y) when (t,x) ∈ Ω_{MN}; p̂ is bounded on [0,T] × R for each T > 0; p̂ = 0 when (t,x) ∈ (([0,+∞) × R) \ Ω_{MN}^{2γ}) ∪ {s ≥ t}.

Now we estimate Lp̂. Notice that Lp̂ = 0 outside the set Θ_{MN}^γ, and for each (t,x) ∈ Θ_{MN}^γ the following holds:

Lp̂ = L((p_{y+h} − p_y)Ψ) = Ψ L(p_{y+h} − p_y) + (p_{y+h} − p_y) LΨ + 2a (∂(p_{y+h} − p_y)/∂x)(∂Ψ/∂x) = (p_{y+h} − p_y) LΨ + 2a (∂(p_{y+h} − p_y)/∂x)(∂Ψ/∂x),

since L(p_{y+h} − p_y) = 0; for convenience we denote p_y = p(t,x;s,y). The estimates (11) and (12) imply that

|∂p(t,x;s,y+h)/∂x − ∂p(t,x;s,y)/∂x| ≤ C h^φ (t−s)^{−3/2} sup_{y∈[j,j+1]} e^{−ν(x−y)²/(t−s)} e^{−η(t−s)} E_{λ,λ}(C̃(t−s)^λ);

|p(t,x;s,y+h) − p(t,x;s,y)| = |∫_{x−y−h}^{x−y} (∂p(t,v;s,0)/∂v) dv|
 ≤ ∫_{x−y−h}^{x−y} (t−s)^{−1} e^{−νv²/(t−s)} e^{−η(t−s)} E_{λ,λ+1/2}(C̃(t−s)^λ) dv
 ≤ C h (t−s)^{−1} sup_{y∈[j,j+1]} e^{−ν(x−y)²/(t−s)} e^{−η(t−s)} E_{λ,λ+1/2}(C̃(t−s)^λ).
Therefore, for each (t,x) ∈ Θ_{MN}^γ,

|Lp̂| ≤ C h^φ sup_{y∈[j,j+1]} e^{−ν(x−y)²/(t−s)} e^{−η(t−s)} (γ^{−2} (t−s)^{−1} E_{λ,λ+1/2}(C̃(t−s)^λ) + γ^{−1} (t−s)^{−3/2} E_{λ,λ}(C̃(t−s)^λ)).   (40)

We rewrite I₁ as

I₁ = ∫₀^{t−3γ} |p(t,x;s,y+h) − p(t,x;s,y)| |σ(s,y)| ds + ∫_{t−3γ}^t |p(t,x;s,y+h) − p(t,x;s,y)| |σ(s,y)| ds = I₁₁ + I₁₂.

Consider the function p̂(t,x;s,y,h) with M = 3γ, N = 3γ + 1. We estimate I₁₁ analogously to I₂₁. It follows from (40) and Lemma 3 that

|p̂(t,x;s,y,h)| ≤ C(γ) h^φ e^{−c₀t} t for all (t,x) ∈ [0,+∞) × R.

On the other hand, the equality p̂(t,x;s,y,h) = p(t,x;s,y+h) − p(t,x;s,y) for t − s > 3γ implies that

|I₁₁| ≤ C(γ) h^φ e^{−c₀t} t² → 0, t → ∞.   (41)

I₁₂ is estimated in the following way:

|I₁₂| ≤ C ∫_{t−3γ}^t ds ∫_{x−y−h}^{x−y} C_σ(s) (t−s)^{−1} e^{−νv²/(t−s)} e^{−η(t−s)} E_{λ,λ+1/2}(C̃(t−s)^λ) dv
 ≤ C e^{3ηγ} E_{λ,λ+1/2}(C̃(3γ)^λ) sup_{s∈[t−3γ,t]} |C_σ(s)| ∫₀^{h/2} v^{−2l} dv ∫_{t−3γ}^t (t−s)^{−1+l} ds
 = C e^{3ηγ} E_{λ,λ+1/2}(C̃(3γ)^λ) sup_{s∈[t−3γ,t]} |C_σ(s)| γ^l h^{1−2l} → 0, t → ∞,   (42)

where l ∈ (0,1/2); here we used the bound (t−s)^{−1} e^{−νv²/(t−s)} ≤ C v^{−2l} (t−s)^{−1+l}. We can choose l and φ such that 1 − 2l = β(σ) and φ = β(σ).

Now assume that for each y ∈ [j,j+1] and some m ∈ N the following inequality holds:

|x − y| ≥ m + 1.   (43)

Then we consider the functions p̃(t,x;s,y) and p̂(t,x;s,y,h) with M = 3γ and N = 3γ + m. For such M and N, provided that (43) holds, (t,x) ∈ Ω_{MN}. Moreover, using (37), (40), the fact that (43) implies

sup_{y∈[j,j+1]} e^{−ν(x−y)²/(t−s)} ≤ e^{−ν₁m²/(3γ)} sup_{y∈[j,j+1]} e^{−(ν−ν₁)(x−y)²/(t−s)}, (t,x) ∈ Θ_{MN}^γ, 0 < ν₁ < ν,

and Lemma 3, we obtain

|p(t,x;s,y)| ≤ C(γ) e^{−c₀t} t e^{−ν₁m²/(3γ)},

|p(t,x;s,y+h) − p(t,x;s,y)| ≤ C(γ) h^{β(σ)} e^{−c₀t} t e^{−ν₁m²/(3γ)}.

Now it is easy to estimate I₁ and I₂:

I₁ ≤ C(γ) h^{β(σ)} e^{−c₀t} t² e^{−ν₁m²/(3γ)} ≤ C(γ) h^{β(σ)} e^{−c₀t} t² m^{−1} → 0, t → ∞,   (44)

I₂ ≤ C(γ) h^{β(σ)} e^{−c₀t} t² e^{−ν₁m²/(3γ)} ≤ C(γ) h^{β(σ)} e^{−c₀t} t² m^{−1} → 0, t → ∞.   (45)

Note that we estimate |q(t,x,y)| analogously to I₂. From (38), (39), (41), (42), (44), (45) it follows that there exists a function G_γ(t) : [0,+∞) → [0,+∞) such that G_γ(t) → 0 as t → ∞ and

w₂(q(t,x,·), r) ≤ G_γ(t) r^{β(σ)} for all t, r ≥ 0, j ∈ Z, x ∈ R;

|q(t,x,j)| ≤ G_γ(t) for all t ≥ 0, j ∈ Z, x ∈ R;

w₂(q(t,x,·), r) ≤ G_γ(t) r^{β(σ)} m^{−1} for all t, r ≥ 0, j ∈ Z and x ∈ R such that max_{y∈[j,j+1]} |x−y| ≥ m+1;

|q(t,x,j)| ≤ G_γ(t) m^{−1} for all t ≥ 0, j ∈ Z and x ∈ R such that max_{y∈[j,j+1]} |x−y| ≥ m+1.

From this it follows that for each α ∈ (1/2, β(σ)) the following inequalities hold:

‖q(t,x,·)‖_{B₂₂^α([j,j+1])} ≤ C G_γ(t) for all t ≥ 0, j ∈ Z, x ∈ R;

‖q(t,x,·)‖_{B₂₂^α([j,j+1])} ≤ C G_γ(t) m^{−1} for all t ≥ 0, j ∈ Z and x ∈ R such that max_{y∈[j,j+1]} |x−y| ≥ m+1.

These estimates imply

Σ_{j∈Z} |q(t,x,j)|² ≤ C G_γ²(t) + C G_γ²(t) Σ_{m∈N} m^{−2} = C G_γ²(t),

Σ_{j∈Z} ‖q(t,x,·)‖²_{B₂₂^α([j,j+1])} ≤ C G_γ²(t) + C G_γ²(t) Σ_{m∈N} m^{−2} = C G_γ²(t)

for each x ∈ R. On the other hand, Σ_{j∈Z} |μ((j,j+1])|² < ∞ a.s. and Σ_{j∈Z} Σ_{n=1}^∞ 2^{−n(2α−1)} Σ_{k=1}^{2^n} |μ(Δ_k^{(n)}(j))|² < ∞ a.s.
Therefore, for each version that satisfies (34) we have |ν₁(t,x)| ≤ C(ω) G_γ(t). Taking the supremum over x and letting t → ∞, we obtain the statement of the lemma.  □
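The smoothing kernel η_ε used above to build Ψ is the standard two-dimensional mollifier, and its two key properties, unit mass and support in the ball of radius ε, are easy to confirm numerically. In the sketch below the grid step and the test values are our own ad hoc choices.

```python
import math

# The standard mollifier on R^2: eta_1(v) = C * exp(1/(|v|^2 - 1)) for
# |v| < 1 and 0 otherwise, with C chosen so that its integral equals 1.
def eta1_unnormalized(v1, v2):
    r2 = v1 * v1 + v2 * v2
    return math.exp(1.0 / (r2 - 1.0)) if r2 < 1.0 else 0.0

def integral2d(f, step=0.005):
    """Midpoint-rule integral of f over the square [-1, 1]^2."""
    n = int(2.0 / step)
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += f(-1.0 + (i + 0.5) * step, -1.0 + (j + 0.5) * step)
    return total * step * step

C = 1.0 / integral2d(eta1_unnormalized)        # normalizing constant

def eta_eps(v1, v2, eps):
    """Scaled mollifier eta_eps(v) = eps^{-2} eta_1(v / eps)."""
    return C * eta1_unnormalized(v1 / eps, v2 / eps) / eps ** 2
```

By the change of variables v → v/ε, every η_ε also integrates to 1, which is what makes Ψ take values in [0, 1].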

Now we return to the proof of Theorem 1.

Proof. We use the iteration process (9). For each n ∈ N we consider the function

ν₂^{(n)}(t,x) = ∫_R p(t,x;0,y) u₀(y) dy + ∫₀^t ds ∫_R p(t,x;s,y) f(s,y,u^{(n−1)}(s,y)) dy.

From [6, Theorem 2 §4] it follows that the function ν₂^{(n)} is a solution, bounded on [0,T] × R, of the Cauchy problem

Lv(t,x) = −f(t,x,u^{(n−1)}(t,x)), v(0,x) = u₀(x),

for each ω ∈ Ω and T > 0. Using Lemma 3, we obtain

|ν₂^{(n)}(t,x)| ≤ C e^{−c₀t}(1 + t).   (46)

Now from (8) and (46) it follows that

|u(t,x)| ≤ C e^{−c₀t}(1 + t) + sup_{x∈R} |ν₁(t,x)|.

Taking the supremum over x, letting t → ∞ and using Lemma 5, we obtain the statement of the theorem.  □

References

[1] Bodnarchuk, I.M.: Asymptotic behavior of a mild solution of the stochastic heat equation. Bulletin of Taras Shevchenko National University of Kyiv. Series: Physics & Mathematics 36(2), 40–42 (2016).
[2] Bodnarchuk, I.M.: Regularity of the mild solution of a parabolic equation with stochastic measure. Ukr. Math. J. 69, 1–18 (2017). MR3631616. https://doi.org/10.1007/s11253-017-1344-4
[3] Bodnarchuk, I.M., Radchenko, V.M.: Asymptotic behavior of solutions of the heat equation with stochastic measure. Scientific Herald of Yuriy Fedkovych Chernivtsi National University 2(1) (2012).
[4] Borysenko, O.D., Borysenko, D.O.: Asymptotic behavior of a solution of the non-autonomous logistic stochastic differential equation. Theory Probab. Math. Stat. 101, 39–50 (2020). MR4060330. https://doi.org/10.1090/tpms/1110
[5] Friedman, A.: Stochastic Differential Equations and Applications. Academic Press, New York (1975). MR0494490
[6] Ilyin, A.M., Kalashnikov, A.S., Oleynik, O.A.: Linear second-order partial differential equations of the parabolic type. J. Math. Sci. 108(4), 435–542 (2002). https://doi.org/10.1023/A:1013156322602
[7] Jiang, Y., Wang, S., Wang, X.: Asymptotics of the solutions to stochastic wave equations driven by a non-Gaussian Lévy process. Acta Math. Sci. 39, 731–746 (2019). MR4066502. https://doi.org/10.1007/s10473-019-0307-2
[8] Jrbashian, M.M.: Integral Transformations and Function Representations in Complex Space. Nauka, Moscow (1966).
[9] Kamont, A.: A discrete characterization of Besov spaces. Approx. Theory Appl. (N.S.) 13(2), 63–77 (1997). MR1750304
[10] Kohatsu-Higa, A., Nualart, D.: Large time asymptotic properties of the stochastic heat equation. J. Theor. Probab. 34(3), 1455–1473 (2021). MR4289891. https://doi.org/10.1007/s10959-020-01007-y
[11] Kwapień, S., Woyczyński, W.A.: Random Series and Stochastic Integrals: Single and Multiple. Birkhäuser, Boston (1992). MR1167198. https://doi.org/10.1007/978-1-4612-0425-1
[12] Manikin, B.I.: Averaging principle for the one-dimensional parabolic equation driven by stochastic measure. Mod. Stoch. Theory Appl. 9(2), 123–137 (2022). MR4420680. https://doi.org/10.15559/21-vmsta195
[13] Mémin, J., Mishura, Y., Valkeila, E.: Inequalities for the moments of Wiener integrals with respect to a fractional Brownian motion. Stat. Probab. Lett. 51, 197–206 (2001). MR1822771. https://doi.org/10.1016/S0167-7152(00)00157-7
[14] Radchenko, V.M.: Mild solution of the heat equation with a general stochastic measure. Stud. Math. 194(3), 231–251 (2009). MR2539554. https://doi.org/10.4064/sm194-3-2
[15] Radchenko, V.M.: Asymptotic behavior of the solution of the heat equation with a stochastic measure as t → ∞. Nauk. Visnyk Uzhgorod. Un-tu 23(1), 113–118 (2012).
[16] Radchenko, V.M.: Evolution equations driven by general stochastic measures in Hilbert space. Theory Probab. Appl. 59(2), 328–339 (2015). MR3416054. https://doi.org/10.1137/S0040585X97T987119
[17] Radchenko, V.M.: Averaging principle for equation driven by a stochastic measure. Stochastics 91(6), 905–915 (2019). MR3985803. https://doi.org/10.1080/17442508.2018.1559320
[18] Radchenko, V.M., Manikin, B.I.: Approximation of the solution to the parabolic equation driven by stochastic measure. Theory Probab. Math. Stat. 102, 145–156 (2020). MR4421340. https://doi.org/10.1090/tpms
[19] Samorodnitsky, G., Taqqu, M.: Stable Non-Gaussian Random Processes. Chapman and Hall, London (1994). MR1280932