Open Access

Uncertainty relations for generalized metric adjusted skew information and generalized metric adjusted correlation measure

Journal of Uncertainty Analysis and Applications 2013, 1:12

DOI: 10.1186/2195-5468-1-12

Received: 10 September 2013

Accepted: 24 October 2013

Published: 11 November 2013

Abstract

In this paper, we give Heisenberg-type and Schrödinger-type uncertainty relations for the generalized metric adjusted skew information and the generalized metric adjusted correlation measure. These results generalize a previous result of Furuichi and Yanagi (J. Math. Anal. Appl. 388:1147-1156, 2012).

AMS subject classification

Primary: 15A45, 47A63; secondary: 94A17

Keywords

Trace inequality; Metric adjusted skew information; Metric adjusted correlation measure

Introduction

We start from the Heisenberg uncertainty relation [1]:
$$ V_\rho(A)\, V_\rho(B) \geq \frac{1}{4}\left|\mathrm{Tr}[\rho[A,B]]\right|^2 $$
for a quantum state (density operator) ρ and two observables (self-adjoint operators) A and B. A further, stronger result was given by Schrödinger in [2, 3]:
$$ V_\rho(A)\, V_\rho(B) - \left|\mathrm{Re}\,\mathrm{Cov}_\rho(A,B)\right|^2 \geq \frac{1}{4}\left|\mathrm{Tr}[\rho[A,B]]\right|^2, $$

where the covariance is defined by $\mathrm{Cov}_\rho(A,B) \equiv \mathrm{Tr}\left[\rho\left(A - \mathrm{Tr}[\rho A]I\right)\left(B - \mathrm{Tr}[\rho B]I\right)\right]$ and the variance by $V_\rho(A) \equiv \mathrm{Cov}_\rho(A,A)$.
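Both relations are easy to test numerically. The following NumPy sketch (our illustration, not part of the original argument; the 4×4 faithful states and observables are randomly generated) checks the Schrödinger refinement, which implies the Heisenberg relation:

```python
import numpy as np

rng = np.random.default_rng(0)

def dag(X):
    return X.conj().T

def random_density(n):
    # random faithful (strictly positive) density matrix
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    P = G @ dag(G) + 1e-3 * np.eye(n)
    return P / np.trace(P).real

def random_obs(n):
    # random self-adjoint observable
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (G + dag(G)) / 2

def cov(rho, A, B):
    # Cov_rho(A, B) = Tr[rho (A - Tr[rho A] I)(B - Tr[rho B] I)]
    n = rho.shape[0]
    A0 = A - np.trace(rho @ A).real * np.eye(n)
    B0 = B - np.trace(rho @ B).real * np.eye(n)
    return np.trace(rho @ A0 @ B0)

violations = 0
for _ in range(200):
    rho, A, B = random_density(4), random_obs(4), random_obs(4)
    VA, VB = cov(rho, A, A).real, cov(rho, B, B).real
    rhs = 0.25 * abs(np.trace(rho @ (A @ B - B @ A)))**2
    # Schrodinger: V V - |Re Cov|^2 >= (1/4)|Tr rho[A,B]|^2
    if VA * VB - cov(rho, A, B).real**2 < rhs - 1e-10:
        violations += 1
print(violations)  # expected: 0
```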

The Wigner-Yanase skew information represents a measure of the non-commutativity between a quantum state ρ and an observable H. Luo introduced the quantity $U_\rho(H)$, representing a quantum uncertainty excluding the classical mixture [4]:
$$ U_\rho(H) \equiv \sqrt{V_\rho(H)^2 - \left(V_\rho(H) - I_\rho(H)\right)^2}, $$
with the Wigner-Yanase skew information [5]:
$$ I_\rho(H) \equiv \frac{1}{2}\mathrm{Tr}\left[\left(i[\rho^{1/2}, H_0]\right)^2\right] = \mathrm{Tr}[\rho H_0^2] - \mathrm{Tr}[\rho^{1/2} H_0 \rho^{1/2} H_0], \qquad H_0 \equiv H - \mathrm{Tr}[\rho H]\, I, $$
and then he successfully showed a new Heisenberg-type uncertainty relation on U ρ (H) in [4]:
$$ U_\rho(A)\, U_\rho(B) \geq \frac{1}{4}\left|\mathrm{Tr}[\rho[A,B]]\right|^2. \tag{1} $$

As stated in [4], the physical meaning of the quantity $U_\rho(H)$ can be interpreted as follows. For a mixed state ρ, the variance $V_\rho(H)$ contains both a classical mixture and a quantum uncertainty, while the Wigner-Yanase skew information $I_\rho(H)$ represents a kind of quantum uncertainty [6, 7]. Thus, the difference $V_\rho(H) - I_\rho(H)$ captures the classical mixture, so the quantity $U_\rho(H)$ can be regarded as a quantum uncertainty with the classical mixture excluded. Therefore, it is meaningful and suitable to study an uncertainty relation for a mixed state by the use of the quantity $U_\rho(H)$.

Recently, a one-parameter extension of the inequality (1) was given in [8]:
$$ U_{\rho,\alpha}(A)\, U_{\rho,\alpha}(B) \geq \alpha(1-\alpha)\left|\mathrm{Tr}[\rho[A,B]]\right|^2, $$
where
$$ U_{\rho,\alpha}(H) \equiv \sqrt{V_\rho(H)^2 - \left(V_\rho(H) - I_{\rho,\alpha}(H)\right)^2}, $$
with the Wigner-Yanase-Dyson skew information I ρ,α (H) defined by
$$ I_{\rho,\alpha}(H) \equiv \frac{1}{2}\mathrm{Tr}\left[\left(i[\rho^{\alpha}, H_0]\right)\left(i[\rho^{1-\alpha}, H_0]\right)\right] = \mathrm{Tr}[\rho H_0^2] - \mathrm{Tr}[\rho^{\alpha} H_0\, \rho^{1-\alpha} H_0]. $$
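As an illustration (ours, not from the cited papers), $I_{\rho,\alpha}$ and $U_{\rho,\alpha}$ can be evaluated by eigendecomposition and the inequality of [8] checked on random faithful states; the dimension 3 and the values of α below are arbitrary test choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def mpow(rho, a):
    # rho^a for rho > 0, via eigendecomposition
    w, V = np.linalg.eigh(rho)
    return (V * w**a) @ V.conj().T

def center(rho, H):
    # H0 = H - Tr[rho H] I
    return H - np.trace(rho @ H).real * np.eye(len(H))

def wyd_skew(rho, H, a):
    # I_{rho,alpha}(H) = Tr[rho H0^2] - Tr[rho^a H0 rho^(1-a) H0]
    H0 = center(rho, H)
    return (np.trace(rho @ H0 @ H0)
            - np.trace(mpow(rho, a) @ H0 @ mpow(rho, 1 - a) @ H0)).real

def U_alpha(rho, H, a):
    H0 = center(rho, H)
    V = np.trace(rho @ H0 @ H0).real          # variance
    I = wyd_skew(rho, H, a)
    return np.sqrt(max(V**2 - (V - I)**2, 0.0))

def rand_density(n):
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    P = G @ G.conj().T + 1e-3 * np.eye(n)
    return P / np.trace(P).real

def rand_obs(n):
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (G + G.conj().T) / 2

ok = True
for a in (0.2, 0.5, 0.8):
    for _ in range(50):
        rho, A, B = rand_density(3), rand_obs(3), rand_obs(3)
        lhs = U_alpha(rho, A, a) * U_alpha(rho, B, a)
        rhs = a * (1 - a) * abs(np.trace(rho @ (A @ B - B @ A)))**2
        ok = ok and bool(lhs >= rhs - 1e-9)
print(ok)  # expected: True
```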
It is notable that the convexity of $I_{\rho,\alpha}(H)$ with respect to ρ was successfully proven by Lieb in [9]. A further generalization of the Heisenberg-type uncertainty relation on $U_\rho(H)$ was given in [10], using the generalized Wigner-Yanase-Dyson skew information introduced in [11]. It was recently shown in [12] that these skew informations are connected to special choices of quantum Fisher information. The family of all quantum Fisher informations is parametrized by a certain class $F_{op}$ of operator monotone functions, which was characterized in [13]. The Wigner-Yanase skew information and the Wigner-Yanase-Dyson skew information are given by the following operator monotone functions:
$$ f_{\mathrm{WY}}(x) = \left(\frac{\sqrt{x}+1}{2}\right)^2, \qquad f_{\mathrm{WYD}}(x) = \alpha(1-\alpha)\frac{(x-1)^2}{(x^{\alpha}-1)(x^{1-\alpha}-1)}, \quad \alpha \in (0,1), $$

respectively. In particular, the operator monotonicity of the function $f_{\mathrm{WYD}}$ was proved in [14] (see also [15]). On the other hand, the uncertainty relation related to the Wigner-Yanase skew information was given by Luo [4], and the uncertainty relation related to the Wigner-Yanase-Dyson skew information was given by Yanagi [8]. In this paper, we generalize these uncertainty relations to uncertainty relations related to quantum Fisher informations, by using the (generalized) metric adjusted skew information and correlation measure.

Operator monotone functions

Let $M_n(\mathbb{C})$ (respectively $M_{n,\mathrm{sa}}(\mathbb{C})$) be the set of all n×n complex matrices (respectively all n×n self-adjoint matrices), endowed with the Hilbert-Schmidt scalar product $\langle A,B\rangle = \mathrm{Tr}(A^*B)$. Let $M_{n,+}(\mathbb{C})$ be the set of strictly positive elements of $M_n(\mathbb{C})$ and $M_{n,+,1}(\mathbb{C})$ be the set of strictly positive density matrices, that is, $M_{n,+,1}(\mathbb{C}) = \{\rho \in M_n(\mathbb{C}) \mid \mathrm{Tr}\,\rho = 1, \rho > 0\}$. If it is not otherwise specified, from now on we shall treat the case of faithful states, that is, ρ > 0.

A function $f : (0,+\infty) \to \mathbb{R}$ is said to be operator monotone if, for any n and $A, B \in M_{n,+}(\mathbb{C})$ such that $0 \leq A \leq B$, the inequalities $0 \leq f(A) \leq f(B)$ hold. An operator monotone function is said to be symmetric if $f(x) = x f(x^{-1})$ and normalized if $f(1) = 1$.

Definition 1

$F_{op}$ is the class of functions $f : (0,+\infty) \to (0,+\infty)$ such that
  1. $f(1) = 1$,
  2. $t f(t^{-1}) = f(t)$,
  3. $f$ is operator monotone.

Example 1

Examples of elements of F op are given by the following list:
$$ f_{\mathrm{RLD}}(x) = \frac{2x}{x+1}, \quad f_{\mathrm{WY}}(x) = \left(\frac{\sqrt{x}+1}{2}\right)^2, \quad f_{\mathrm{BKM}}(x) = \frac{x-1}{\log x}, \quad f_{\mathrm{SLD}}(x) = \frac{x+1}{2}, \quad f_{\mathrm{WYD}}(x) = \alpha(1-\alpha)\frac{(x-1)^2}{(x^{\alpha}-1)(x^{1-\alpha}-1)}, \ \alpha \in (0,1). $$

Remark 1

Any $f \in F_{op}$ satisfies
$$ \frac{2x}{x+1} \leq f(x) \leq \frac{x+1}{2}, \qquad x > 0. $$
For $f \in F_{op}$, define $f(0) = \lim_{x\to 0} f(x)$. We introduce the sets of regular and non-regular functions
$$ F_{op}^{\,r} = \left\{ f \in F_{op} \mid f(0) \neq 0 \right\}, \qquad F_{op}^{\,n} = \left\{ f \in F_{op} \mid f(0) = 0 \right\}, $$

and notice that trivially $F_{op} = F_{op}^{\,r} \cup F_{op}^{\,n}$.
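These bounds and the regular/non-regular split are easy to confirm numerically; the sketch below (ours; α = 0.3 in $f_{\mathrm{WYD}}$ is an arbitrary test value) samples the five functions of Example 1 on a grid:

```python
import numpy as np

def f_rld(x): return 2 * x / (x + 1)
def f_wy(x):  return ((np.sqrt(x) + 1) / 2) ** 2
def f_bkm(x): return (x - 1) / np.log(x)
def f_sld(x): return (x + 1) / 2
def f_wyd(x, a=0.3):  # alpha = 0.3 is an arbitrary test value
    return a * (1 - a) * (x - 1) ** 2 / ((x ** a - 1) * (x ** (1 - a) - 1))

xs = np.linspace(0.01, 50.0, 2000)
xs = xs[np.abs(xs - 1) > 1e-6]   # avoid the removable singularity at x = 1
for f in (f_rld, f_wy, f_bkm, f_sld, f_wyd):
    y = f(xs)
    assert np.all(2 * xs / (xs + 1) <= y + 1e-12)   # f_RLD is the smallest
    assert np.all(y <= (xs + 1) / 2 + 1e-12)        # f_SLD is the largest

# f(0) = lim_{x -> 0} f(x): f_WY is regular, f_RLD is non-regular
print(round(float(f_wy(1e-9)), 4), round(float(f_rld(1e-9)), 4))  # 0.25 0.0
```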

Definition 2

For $f \in F_{op}^{\,r}$, we set
$$ \tilde{f}(x) = \frac{1}{2}\left[(x+1) - (x-1)^2\,\frac{f(0)}{f(x)}\right], \qquad x > 0. $$

Theorem 1

([12, 16, 17]) The correspondence $f \mapsto \tilde{f}$ is a bijection between $F_{op}^{\,r}$ and $F_{op}^{\,n}$.
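For instance (our worked example, with the limits f(0) supplied analytically), $\tilde{f}_{\mathrm{SLD}} = f_{\mathrm{RLD}}$ and $\tilde{f}_{\mathrm{WY}}(x) = \sqrt{x}$, both non-regular, which a short numerical check confirms:

```python
import numpy as np

def f_tilde(f, f0, x):
    # \tilde f(x) = (1/2)[(x + 1) - (x - 1)^2 f(0) / f(x)]
    return 0.5 * ((x + 1) - (x - 1) ** 2 * f0 / f(x))

f_sld = lambda x: (x + 1) / 2                     # f_SLD(0) = 1/2
f_wy  = lambda x: ((np.sqrt(x) + 1) / 2) ** 2     # f_WY(0)  = 1/4
f_rld = lambda x: 2 * x / (x + 1)

xs = np.linspace(0.1, 10.0, 100)
assert np.allclose(f_tilde(f_sld, 0.5, xs), f_rld(xs))     # tilde f_SLD = f_RLD
assert np.allclose(f_tilde(f_wy, 0.25, xs), np.sqrt(xs))   # tilde f_WY(x) = sqrt(x)
print("ok")
```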

Metric adjusted skew information and correlation measure

In the Kubo-Ando theory of matrix means, one associates a mean to each operator monotone function $f \in F_{op}$ by the formula
$$ m_f(A,B) = A^{1/2}\, f\!\left(A^{-1/2} B A^{-1/2}\right) A^{1/2}, $$
where A , B M n , + ( ) . Using the notion of matrix means, one may define the class of monotone metrics (also called quantum Fisher informations) by the following formula:
$$ \langle A, B\rangle_{\rho,f} = \mathrm{Tr}\left(A^*\, m_f(L_\rho, R_\rho)^{-1}(B)\right), $$

where $L_\rho(A) = \rho A$ and $R_\rho(A) = A\rho$. In this case, one has to think of A and B as tangent vectors to the manifold $M_{n,+,1}(\mathbb{C})$ at the point ρ (see [12, 13]).
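Since $L_\rho$ and $R_\rho$ commute, $m_f(L_\rho, R_\rho)$ acts entrywise in an eigenbasis of ρ, multiplying the (j,k) entry by $m_f(\lambda_j, \lambda_k) = \lambda_k f(\lambda_j/\lambda_k)$. The sketch below (our illustration; $f_{\mathrm{SLD}}$ is used because its mean is the arithmetic mean) implements the mean and the metric this way:

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_matrix(f, lam):
    # matrix of means m_f(lam_j, lam_k) = lam_k f(lam_j / lam_k)
    L, R = lam[:, None], lam[None, :]
    return R * f(L / R)

def mf_apply(f, rho, B):
    # m_f(L_rho, R_rho)(B): entrywise multiplication in the eigenbasis of rho
    lam, V = np.linalg.eigh(rho)
    Bt = V.conj().T @ B @ V
    return V @ (mean_matrix(f, lam) * Bt) @ V.conj().T

def metric(f, rho, A, B):
    # <A, B>_{rho,f} = Tr(A^* m_f(L_rho, R_rho)^{-1}(B))
    lam, V = np.linalg.eigh(rho)
    At, Bt = V.conj().T @ A @ V, V.conj().T @ B @ V
    return np.trace(At.conj().T @ (Bt / mean_matrix(f, lam)))

f_sld = lambda x: (x + 1) / 2

n = 3
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
rho = G @ G.conj().T + 0.1 * np.eye(n)
rho = rho / np.trace(rho).real
A = (G + G.conj().T) / 2

# for f_SLD the mean is arithmetic: m_f(L, R)(B) = (rho B + B rho) / 2
assert np.allclose(mf_apply(f_sld, rho, A), (rho @ A + A @ rho) / 2)
assert metric(f_sld, rho, A, A).real > 0   # monotone metrics are positive
print("ok")
```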

Definition 3

For $A, B \in M_{n,\mathrm{sa}}(\mathbb{C})$ and $\rho \in M_{n,+,1}(\mathbb{C})$, we define the following quantities:
$$\begin{aligned} \mathrm{Corr}_\rho^f(A,B) &= \mathrm{Tr}[\rho AB] - \mathrm{Tr}\left[A\, m_{\tilde f}(L_\rho,R_\rho)(B)\right], \\ \mathrm{Corr}_\rho^{s(f)}(A,B) &= \frac{f(0)}{2}\,\left\langle i[\rho,A],\, i[\rho,B]\right\rangle_{\rho,f}, \\ I_\rho^f(A) &= \mathrm{Corr}_\rho^f(A,A), \\ C_\rho^f(A,B) &= \mathrm{Tr}\left[A\, m_f(L_\rho,R_\rho)(B)\right], \qquad C_\rho^f(A) = C_\rho^f(A,A), \\ U_\rho^f(A) &= \sqrt{V_\rho(A)^2 - \left(V_\rho(A) - I_\rho^f(A)\right)^2}. \end{aligned}$$

The quantity $I_\rho^f(A)$ is known as the metric adjusted skew information [18], and the metric adjusted correlation measure $\mathrm{Corr}_\rho^f(A,B)$ was also previously defined in [18].

Then we have the following proposition.

Proposition 1

([16, 19]) For $A, B \in M_{n,\mathrm{sa}}(\mathbb{C})$ and $\rho \in M_{n,+,1}(\mathbb{C})$, we have the following relations, where we put $A_0 = A - \mathrm{Tr}[\rho A]I$ and $B_0 = B - \mathrm{Tr}[\rho B]I$:
  1. $I_\rho^f(A) = I_\rho^f(A_0) = \mathrm{Tr}[\rho A_0^2] - \mathrm{Tr}\left[A_0\, m_{\tilde f}(L_\rho,R_\rho)(A_0)\right] = V_\rho(A) - C_\rho^{\tilde f}(A_0)$,
  2. $J_\rho^f(A) = \mathrm{Tr}[\rho A_0^2] + \mathrm{Tr}\left[A_0\, m_{\tilde f}(L_\rho,R_\rho)(A_0)\right] = V_\rho(A) + C_\rho^{\tilde f}(A_0)$,
  3. $0 \leq I_\rho^f(A) \leq U_\rho^f(A) \leq V_\rho(A)$,
  4. $U_\rho^f(A) = \sqrt{I_\rho^f(A)\, J_\rho^f(A)}$,
  5. $\mathrm{Corr}_\rho^f(A,B) = \mathrm{Corr}_\rho^f(A_0,B_0) = \mathrm{Tr}[\rho A_0 B_0] - \mathrm{Tr}\left[A_0\, m_{\tilde f}(L_\rho,R_\rho)(B_0)\right]$,
  6. $\mathrm{Corr}_\rho^{s(f)}(A,B) = \mathrm{Corr}_\rho^{s(f)}(A_0,B_0) = \frac{1}{2}\mathrm{Tr}[\rho A_0 B_0] + \frac{1}{2}\mathrm{Tr}[\rho B_0 A_0] - \mathrm{Tr}\left[A_0\, m_{\tilde f}(L_\rho,R_\rho)(B_0)\right] = \frac{1}{2}\mathrm{Tr}[\rho A_0 B_0] + \frac{1}{2}\mathrm{Tr}[\rho B_0 A_0] - C_\rho^{\tilde f}(A_0,B_0)$.
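As an illustration (ours, not part of the paper), relations 1-4 can be checked numerically for $f = f_{\mathrm{WY}}$, for which $\tilde f(x) = \sqrt{x}$ and hence $m_{\tilde f}(x,y) = \sqrt{xy}$; the 3×3 faithful state below is randomly generated:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3

G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
rho = G @ G.conj().T + 0.1 * np.eye(n)
rho = rho / np.trace(rho).real
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (H + H.conj().T) / 2
A0 = A - np.trace(rho @ A).real * np.eye(n)

lam, V = np.linalg.eigh(rho)
At = V.conj().T @ A0 @ V

# C^{~f}(A0) with m_{~f}(x, y) = sqrt(x y), i.e. the geometric mean for f = f_WY
M = np.sqrt(lam[:, None] * lam[None, :])
C_tilde = float(np.sum(M * np.abs(At) ** 2))

V_ = np.trace(rho @ A0 @ A0).real
I_ = V_ - C_tilde            # relation 1
J_ = V_ + C_tilde            # relation 2
U_ = np.sqrt(I_ * J_)        # relation 4

# relation 1 again: for f_WY this is the Wigner-Yanase skew information
sq = (V * np.sqrt(lam)) @ V.conj().T           # rho^(1/2)
assert np.isclose(I_, V_ - np.trace(sq @ A0 @ sq @ A0).real)
# relation 3: 0 <= I <= U <= V
assert I_ >= -1e-12
assert I_ <= U_ + 1e-12
assert U_ <= V_ + 1e-12
print("ok")
```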

Now we modify the uncertainty relation given in [20].

Theorem 2

For $f \in F_{op}^{\,r}$, it holds that
$$ I_\rho^f(A)\, I_\rho^f(B) \geq \left|\mathrm{Corr}_\rho^{s(f)}(A,B)\right|^2, $$
where $A, B \in M_{n,\mathrm{sa}}(\mathbb{C})$ and $\rho \in M_{n,+,1}(\mathbb{C})$.

Remark 2

Since Theorem 2 follows easily from the Schwarz inequality, we omit the proof. In [20], we gave the uncertainty relation
$$ U_\rho^f(A)\, U_\rho^f(B) \geq 4 f(0)\left|\mathrm{Corr}_\rho^{s(f)}(A,B)\right|^2. $$
But since $4f(0) \leq 1$ and $I_\rho^f(A) \leq U_\rho^f(A)$, it is easily obtained from Theorem 2.

Theorem 3

([20, 21]) For $f \in F_{op}^{\,r}$, if
$$ \frac{x+1}{2} + \tilde{f}(x) \geq 2 f(x), \tag{2} $$
then it holds that
$$ U_\rho^f(A)\, U_\rho^f(B) \geq f(0)\left|\mathrm{Tr}(\rho[A,B])\right|^2, \tag{3} $$
$$ U_\rho^f(A)\, U_\rho^f(B) \geq 4 f(0)\left|\mathrm{Corr}_\rho^f(A,B)\right|^2, \tag{4} $$
where $A, B \in M_{n,\mathrm{sa}}(\mathbb{C})$ and $\rho \in M_{n,+,1}(\mathbb{C})$.

Remark 3

Though we cannot use the Schwarz inequality here, we can get (4) in Theorem 3 by modifying the proof given in [20].

By putting
$$ f_{\mathrm{WYD}}(x) = \alpha(1-\alpha)\frac{(x-1)^2}{(x^{\alpha}-1)(x^{1-\alpha}-1)}, \qquad \alpha \in (0,1), $$

we obtain the following uncertainty relation.

Corollary 1

For $A, B \in M_{n,\mathrm{sa}}(\mathbb{C})$ and $\rho \in M_{n,+,1}(\mathbb{C})$,
$$ U_\rho^{f_{\mathrm{WYD}}}(A)\, U_\rho^{f_{\mathrm{WYD}}}(B) \geq \alpha(1-\alpha)\left|\mathrm{Tr}(\rho[A,B])\right|^2, \qquad U_\rho^{f_{\mathrm{WYD}}}(A)\, U_\rho^{f_{\mathrm{WYD}}}(B) \geq 4\alpha(1-\alpha)\left|\mathrm{Corr}_\rho^{f_{\mathrm{WYD}}}(A,B)\right|^2, $$
where
$$ \mathrm{Corr}_\rho^{f_{\mathrm{WYD}}}(A,B) = \mathrm{Tr}[\rho A_0 B_0] - \frac{1}{2}\mathrm{Tr}[\rho^{\alpha} A_0\, \rho^{1-\alpha} B_0] - \frac{1}{2}\mathrm{Tr}[\rho^{\alpha} B_0\, \rho^{1-\alpha} A_0]. $$
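Both inequalities of Corollary 1 can be tested numerically; the sketch below (ours; α = 0.3 and dimension 3 are arbitrary choices) does so on random faithful states:

```python
import numpy as np

rng = np.random.default_rng(4)
a = 0.3                      # arbitrary test value of alpha

def mpow(rho, t):
    w, V = np.linalg.eigh(rho)
    return (V * w**t) @ V.conj().T

def center(rho, X):
    return X - np.trace(rho @ X).real * np.eye(len(X))

def U_wyd(rho, X):
    X0 = center(rho, X)
    V_ = np.trace(rho @ X0 @ X0).real
    I_ = V_ - np.trace(mpow(rho, a) @ X0 @ mpow(rho, 1 - a) @ X0).real
    return np.sqrt(max(V_**2 - (V_ - I_)**2, 0.0))

def corr_wyd(rho, A, B):
    A0, B0 = center(rho, A), center(rho, B)
    return (np.trace(rho @ A0 @ B0)
            - 0.5 * np.trace(mpow(rho, a) @ A0 @ mpow(rho, 1 - a) @ B0)
            - 0.5 * np.trace(mpow(rho, a) @ B0 @ mpow(rho, 1 - a) @ A0))

def rand_density(n):
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    P = G @ G.conj().T + 1e-2 * np.eye(n)
    return P / np.trace(P).real

def rand_obs(n):
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (G + G.conj().T) / 2

ok = True
for _ in range(100):
    rho, A, B = rand_density(3), rand_obs(3), rand_obs(3)
    lhs = U_wyd(rho, A) * U_wyd(rho, B)
    ok &= bool(lhs >= a * (1 - a) * abs(np.trace(rho @ (A @ B - B @ A)))**2 - 1e-9)
    ok &= bool(lhs >= 4 * a * (1 - a) * abs(corr_wyd(rho, A, B))**2 - 1e-9)
print(ok)  # expected: True
```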

Remark 4

Even if (2) does not necessarily hold, we still have
$$ U_\rho^f(A)\, U_\rho^f(B) \geq f(0)^2\left|\mathrm{Tr}(\rho[A,B])\right|^2, \tag{5} $$
$$ U_\rho^f(A)\, U_\rho^f(B) \geq 4 f(0)^2\left|\mathrm{Corr}_\rho^f(A,B)\right|^2, \tag{6} $$
where $A, B \in M_{n,\mathrm{sa}}(\mathbb{C})$ and $\rho \in M_{n,+,1}(\mathbb{C})$. Since $f(0) < 1$, it is easy to show that (5) and (6) are weaker than (3) and (4), respectively.

Generalized metric adjusted skew information and correlation measure

We give some generalizations of the Heisenberg-type and Schrödinger-type uncertainty relations which include Theorem 3 as a corollary.

Definition 4

([22]) Let $g, f \in F_{op}^{\,r}$ satisfy
$$ g(x) \geq k\,\frac{(x-1)^2}{f(x)} $$
for some k > 0. We define
$$ \Delta_g^f(x) = g(x) - k\,\frac{(x-1)^2}{f(x)} \in F_{op}. \tag{7} $$

Definition 5

For $A, B \in M_{n,\mathrm{sa}}(\mathbb{C})$ and $\rho \in M_{n,+,1}(\mathbb{C})$, we define the following quantities:
$$\begin{aligned} \mathrm{Corr}_\rho^{s(g,f)}(A,B) &= k\,\left\langle i[\rho,A],\, i[\rho,B]\right\rangle_{\rho,f}, \\ I_\rho^{(g,f)}(A) &= \mathrm{Corr}_\rho^{s(g,f)}(A,A), \\ C_\rho^f(A,B) &= \mathrm{Tr}\left[A\, m_f(L_\rho,R_\rho)(B)\right], \qquad C_\rho^f(A) = C_\rho^f(A,A), \\ U_\rho^{(g,f)}(A) &= \sqrt{\left(C_\rho^g(A) + C_\rho^{\Delta_g^f}(A)\right)\left(C_\rho^g(A) - C_\rho^{\Delta_g^f}(A)\right)}. \end{aligned}$$

The quantities $I_\rho^{(g,f)}(A)$ and $\mathrm{Corr}_\rho^{s(g,f)}(A,B)$ are called the generalized metric adjusted skew information and the generalized metric adjusted correlation measure, respectively.

Then we have the following proposition.

Proposition 2

For $A, B \in M_{n,\mathrm{sa}}(\mathbb{C})$ and $\rho \in M_{n,+,1}(\mathbb{C})$, we have the following relations, where we put $A_0 = A - \mathrm{Tr}[\rho A]I$ and $B_0 = B - \mathrm{Tr}[\rho B]I$:
  1. $I_\rho^{(g,f)}(A) = I_\rho^{(g,f)}(A_0) = C_\rho^g(A_0) - C_\rho^{\Delta_g^f}(A_0)$,
  2. $J_\rho^{(g,f)}(A) = C_\rho^g(A_0) + C_\rho^{\Delta_g^f}(A_0)$,
  3. $U_\rho^{(g,f)}(A) = \sqrt{I_\rho^{(g,f)}(A)\, J_\rho^{(g,f)}(A)}$,
  4. $\mathrm{Corr}_\rho^{s(g,f)}(A,B) = \mathrm{Corr}_\rho^{s(g,f)}(A_0,B_0)$.

Theorem 4

For $f \in F_{op}^{\,r}$, it holds that
$$ I_\rho^{(g,f)}(A)\, I_\rho^{(g,f)}(B) \geq \left|\mathrm{Corr}_\rho^{s(g,f)}(A,B)\right|^2, $$
where $A, B \in M_{n,\mathrm{sa}}(\mathbb{C})$ and $\rho \in M_{n,+,1}(\mathbb{C})$.

Proof of Theorem 4. We define, for $X, Y \in M_n(\mathbb{C})$,
$$ \mathrm{Corr}_\rho^{s(g,f)}(X,Y) = k\,\left\langle i[\rho,X],\, i[\rho,Y]\right\rangle_{\rho,f}. $$
Since
$$\begin{aligned} \mathrm{Corr}_\rho^{s(g,f)}(X,Y) &= k\,\mathrm{Tr}\left((i[\rho,X])^*\, m_f(L_\rho,R_\rho)^{-1}\, i[\rho,Y]\right) = k\,\mathrm{Tr}\left(\left(i(L_\rho - R_\rho)X\right)^*\, m_f(L_\rho,R_\rho)^{-1}\, i(L_\rho - R_\rho)Y\right) \\ &= \mathrm{Tr}\left(X^*\, m_g(L_\rho,R_\rho)Y\right) - \mathrm{Tr}\left(X^*\, m_{\Delta_g^f}(L_\rho,R_\rho)Y\right) \end{aligned}$$
by the definition (7) of $\Delta_g^f$, it is easy to show that $\mathrm{Corr}_\rho^{s(g,f)}(X,Y)$ is an inner product on $M_n(\mathbb{C})$. Then we can get the result by using the Schwarz inequality.
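The Schwarz argument can be illustrated numerically. The sketch below (ours) evaluates $k\langle i[\rho,A], i[\rho,B]\rangle_{\rho,f}$ directly in the eigenbasis of ρ for $f = f_{\mathrm{WYD}}$ with the arbitrary choices α = 0.3 and $k = \alpha(1-\alpha)/2$, and checks Theorem 4 on random states:

```python
import numpy as np

rng = np.random.default_rng(5)
al = 0.3
k = al * (1 - al) / 2        # one admissible constant for f = f_WYD

def f_wyd(x):
    return al * (1 - al) * (x - 1)**2 / ((x**al - 1) * (x**(1 - al) - 1))

def corr_s(rho, A, B):
    # Corr^{s(g,f)}(A, B) = k <i[rho,A], i[rho,B]>_{rho,f}, in the eigenbasis of rho
    lam, V = np.linalg.eigh(rho)
    At, Bt = V.conj().T @ A @ V, V.conj().T @ B @ V
    s = 0.0
    for j in range(len(lam)):
        for m in range(len(lam)):
            if np.isclose(lam[j], lam[m]):
                continue                          # the commutator kernel vanishes there
            mf = lam[m] * f_wyd(lam[j] / lam[m])  # m_f(lam_j, lam_m)
            s += k * (lam[j] - lam[m])**2 / mf * np.conj(At[j, m]) * Bt[j, m]
    return s

ok = True
for _ in range(50):
    n = 3
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    rho = G @ G.conj().T + 0.05 * np.eye(n)
    rho = rho / np.trace(rho).real
    H1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H2 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A, B = (H1 + H1.conj().T) / 2, (H2 + H2.conj().T) / 2
    IA, IB = corr_s(rho, A, A).real, corr_s(rho, B, B).real
    ok &= bool(IA * IB >= abs(corr_s(rho, A, B))**2 - 1e-9)
print(ok)  # expected: True
```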

Theorem 5

For $f \in F_{op}^{\,r}$, if
$$ g(x) + \Delta_g^f(x) \geq \ell f(x) \tag{8} $$
for some $\ell > 0$, then it holds that
$$ U_\rho^{(g,f)}(A)\, U_\rho^{(g,f)}(B) \geq k\ell\left|\mathrm{Tr}(\rho[A,B])\right|^2, \tag{9} $$
where $A, B \in M_{n,\mathrm{sa}}(\mathbb{C})$ and $\rho \in M_{n,+,1}(\mathbb{C})$.
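Theorem 5 can be tested numerically; the sketch below (ours) uses the pair $g = f_{\mathrm{SLD}}$, $f = f_{\mathrm{WYD}}$ with $k = \alpha(1-\alpha)/2$, $\ell = 2$ (a choice for which (7) and (8) hold; see Example 2) and the arbitrary value α = 0.3, evaluating $U_\rho^{(g,f)}$ on centered observables as in the proof below:

```python
import numpy as np

rng = np.random.default_rng(6)
al = 0.3
k, ell = al * (1 - al) / 2, 2.0      # g = f_SLD, f = f_WYD (see Example 2)

def m_g(x, y):
    return (x + y) / 2                # mean of g = f_SLD

def m_dgf(x, y):
    # m_{Delta_g^f}(x, y) = m_g(x, y) - k (x - y)^2 / m_f(x, y); for f = f_WYD
    # this simplifies to (x^al y^(1-al) + x^(1-al) y^al) / 2
    return (x**al * y**(1 - al) + x**(1 - al) * y**al) / 2

def C(mean_fn, rho, X):
    # C^h(X) = Tr[X m_h(L_rho, R_rho)(X)], evaluated in the eigenbasis of rho
    lam, V = np.linalg.eigh(rho)
    Xt = V.conj().T @ X @ V
    M = mean_fn(lam[:, None], lam[None, :])
    return float(np.sum(M * np.abs(Xt)**2))

def U(rho, X):
    # U^{(g,f)} on the centered observable X0, as in the proof via Lemma 2
    X0 = X - np.trace(rho @ X).real * np.eye(len(X))
    cg, cd = C(m_g, rho, X0), C(m_dgf, rho, X0)
    return np.sqrt(max(cg**2 - cd**2, 0.0))

ok = True
for _ in range(50):
    G = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    rho = G @ G.conj().T + 0.05 * np.eye(3)
    rho = rho / np.trace(rho).real
    H1 = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    H2 = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    A, B = (H1 + H1.conj().T) / 2, (H2 + H2.conj().T) / 2
    lhs = U(rho, A) * U(rho, B)
    ok &= bool(lhs >= k * ell * abs(np.trace(rho @ (A @ B - B @ A)))**2 - 1e-9)
print(ok)  # expected: True
```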

In order to prove Theorem 5, we need the following lemmas.

Lemma 1

If (7) and (8) are satisfied, then we have the following inequality:
$$ m_g(x,y)^2 - m_{\Delta_g^f}(x,y)^2 \geq k\ell\,(x-y)^2. $$

Proof of Lemma 1

By (7) and (8), we have
$$ m_{\Delta_g^f}(x,y) = m_g(x,y) - k\,\frac{(x-y)^2}{m_f(x,y)}, \tag{10} $$
$$ m_g(x,y) + m_{\Delta_g^f}(x,y) \geq \ell\, m_f(x,y). \tag{11} $$
Therefore, by (10) and (11),
$$ m_g(x,y)^2 - m_{\Delta_g^f}(x,y)^2 = \left(m_g(x,y) - m_{\Delta_g^f}(x,y)\right)\left(m_g(x,y) + m_{\Delta_g^f}(x,y)\right) \geq k\,\frac{(x-y)^2}{m_f(x,y)}\cdot \ell\, m_f(x,y) = k\ell\,(x-y)^2. $$
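Lemma 1 can be checked numerically on a grid (our sketch; it uses the admissible data $g = f_{\mathrm{SLD}}$, $f = f_{\mathrm{WYD}}$, $k = \alpha(1-\alpha)/2$, $\ell = 2$ of Example 2 with the arbitrary choice α = 0.3):

```python
import numpy as np

al = 0.3
k, ell = al * (1 - al) / 2, 2.0   # data of Example 2: g = f_SLD, f = f_WYD

def m_g(x, y):
    return (x + y) / 2

def m_f(x, y):
    # Kubo-Ando mean of f_WYD: m_f(x, y) = y f(x / y)
    return al * (1 - al) * (x - y)**2 / ((x**al - y**al) * (x**(1 - al) - y**(1 - al)))

xs = np.linspace(0.05, 5.0, 60)
for x in xs:
    for y in xs:
        if abs(x - y) < 1e-9:
            continue                                       # removable singularity
        m_delta = m_g(x, y) - k * (x - y)**2 / m_f(x, y)   # identity (10)
        # Lemma 1: m_g^2 - m_Delta^2 >= k l (x - y)^2
        assert m_g(x, y)**2 - m_delta**2 >= k * ell * (x - y)**2 - 1e-9
print("ok")
```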

We have the following expressions for the quantities $I_\rho^{(g,f)}(A)$, $J_\rho^{(g,f)}(A)$, $U_\rho^{(g,f)}(A)$, and $\mathrm{Corr}_\rho^{s(g,f)}(A,B)$, by using Proposition 2 and the means $m_g$ and $m_{\Delta_g^f}$.

Lemma 2

Let $\{|\phi_1\rangle, |\phi_2\rangle, \ldots, |\phi_n\rangle\}$ be a basis of eigenvectors of ρ, corresponding to the eigenvalues $\{\lambda_1, \lambda_2, \ldots, \lambda_n\}$. We put $a_{jk} = \langle\phi_j|A_0|\phi_k\rangle$ and $b_{jk} = \langle\phi_j|B_0|\phi_k\rangle$, where $A_0 \equiv A - \mathrm{Tr}[\rho A]I$ and $B_0 \equiv B - \mathrm{Tr}[\rho B]I$ for $A, B \in M_{n,\mathrm{sa}}(\mathbb{C})$ and $\rho \in M_{n,+,1}(\mathbb{C})$. Then we have
$$\begin{aligned} I_\rho^{(g,f)}(A) &= \sum_{j,k} m_g(\lambda_j,\lambda_k)\, a_{jk}a_{kj} - \sum_{j,k} m_{\Delta_g^f}(\lambda_j,\lambda_k)\, a_{jk}a_{kj} = 2\sum_{j<k}\left(m_g(\lambda_j,\lambda_k) - m_{\Delta_g^f}(\lambda_j,\lambda_k)\right)|a_{jk}|^2, \\ J_\rho^{(g,f)}(A) &= \sum_{j,k} m_g(\lambda_j,\lambda_k)\, a_{jk}a_{kj} + \sum_{j,k} m_{\Delta_g^f}(\lambda_j,\lambda_k)\, a_{jk}a_{kj} \geq 2\sum_{j<k}\left(m_g(\lambda_j,\lambda_k) + m_{\Delta_g^f}(\lambda_j,\lambda_k)\right)|a_{jk}|^2, \\ U_\rho^{(g,f)}(A)^2 &= \left(\sum_{j,k} m_g(\lambda_j,\lambda_k)|a_{jk}|^2\right)^2 - \left(\sum_{j,k} m_{\Delta_g^f}(\lambda_j,\lambda_k)|a_{jk}|^2\right)^2, \end{aligned}$$
and
$$ \mathrm{Corr}_\rho^{s(g,f)}(A,B) = \sum_{j,k} m_g(\lambda_j,\lambda_k)\, a_{jk}b_{kj} - \sum_{j,k} m_{\Delta_g^f}(\lambda_j,\lambda_k)\, a_{jk}b_{kj} = \sum_{j<k}\left(m_g(\lambda_j,\lambda_k) - m_{\Delta_g^f}(\lambda_j,\lambda_k)\right)a_{jk}b_{kj} + \sum_{j<k}\left(m_g(\lambda_k,\lambda_j) - m_{\Delta_g^f}(\lambda_k,\lambda_j)\right)a_{kj}b_{jk}. $$

We are now in a position to prove Theorem 5.

Proof of Theorem 5. At first we prove (9). Since
$$ \mathrm{Tr}(\rho[A,B]) = \sum_{j,k}(\lambda_j - \lambda_k)\, a_{jk}b_{kj}, $$
we have
$$ \left|\mathrm{Tr}(\rho[A,B])\right| \leq \sum_{j,k}|\lambda_j - \lambda_k|\,|a_{jk}|\,|b_{kj}|. $$
Then by Lemma 1 and the Schwarz inequality, we have
$$\begin{aligned} k\ell\left|\mathrm{Tr}(\rho[A,B])\right|^2 &\leq \left(\sum_{j,k}\sqrt{k\ell}\,|\lambda_j-\lambda_k|\,|a_{jk}|\,|b_{kj}|\right)^2 \leq \left(\sum_{j,k}\left(m_g(\lambda_j,\lambda_k)^2 - m_{\Delta_g^f}(\lambda_j,\lambda_k)^2\right)^{1/2}|a_{jk}|\,|b_{kj}|\right)^2 \\ &\leq \left(\sum_{j,k}\left(m_g(\lambda_j,\lambda_k) - m_{\Delta_g^f}(\lambda_j,\lambda_k)\right)|a_{jk}|^2\right)\left(\sum_{j,k}\left(m_g(\lambda_j,\lambda_k) + m_{\Delta_g^f}(\lambda_j,\lambda_k)\right)|b_{kj}|^2\right) \\ &= I_\rho^{(g,f)}(A)\, J_\rho^{(g,f)}(B). \end{aligned}$$
In a similar way, we also have
$$ I_\rho^{(g,f)}(B)\, J_\rho^{(g,f)}(A) \geq k\ell\left|\mathrm{Tr}(\rho[A,B])\right|^2. $$
Hence, by Proposition 2 (3),
$$ U_\rho^{(g,f)}(A)\, U_\rho^{(g,f)}(B) = \sqrt{I_\rho^{(g,f)}(A)\, J_\rho^{(g,f)}(B)}\;\sqrt{I_\rho^{(g,f)}(B)\, J_\rho^{(g,f)}(A)} \geq k\ell\left|\mathrm{Tr}(\rho[A,B])\right|^2, $$
which is the desired inequality (9). □

We give some examples satisfying the condition (8).

Example 2

Let
$$ g(x) = \frac{x+1}{2}, \quad f(x) = \alpha(1-\alpha)\frac{(x-1)^2}{(x^{\alpha}-1)(x^{1-\alpha}-1)}, \ \alpha \in (0,1), \quad k = \frac{f(0)}{2} = \frac{\alpha(1-\alpha)}{2}, \quad \ell = 2. $$
Then
$$ g(x) + \Delta_g^f(x) \geq 2 f(x). $$
Proof of Example 2. In [10, 21] we gave
$$ (x^{2\alpha}-1)(x^{2(1-\alpha)}-1) \geq 4\alpha(1-\alpha)(x-1)^2 $$
for x > 0 and 0 ≤ α ≤ 1. Putting $s = (x^{\alpha}-1)(x^{1-\alpha}-1)$, we have $(x^{2\alpha}-1)(x^{2(1-\alpha)}-1) = s\left(2(x+1) - s\right)$ and $g(x) + \Delta_g^f(x) = (x+1) - \frac{s}{2}$, so the above inequality is equivalent to
$$ g(x) + \Delta_g^f(x) \geq 2 f(x). $$
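Both the key inequality and the resulting condition (8) are easy to confirm numerically (our sketch over a grid of x and several arbitrary values of α):

```python
import numpy as np

xs = np.linspace(0.01, 20.0, 4000)
xs = xs[np.abs(xs - 1) > 1e-3]       # avoid the removable singularity at x = 1
for a in (0.1, 0.3, 0.5, 0.7, 0.9):
    # the key inequality from [10, 21]
    assert np.all((xs**(2 * a) - 1) * (xs**(2 * (1 - a)) - 1)
                  >= 4 * a * (1 - a) * (xs - 1)**2 - 1e-9)
    # condition (8) with g = f_SLD, k = alpha(1 - alpha)/2, l = 2:
    # Delta_g^f(x) = g(x) - (x^a - 1)(x^(1-a) - 1)/2
    f = a * (1 - a) * (xs - 1)**2 / ((xs**a - 1) * (xs**(1 - a) - 1))
    delta = (xs + 1) / 2 - (xs**a - 1) * (xs**(1 - a) - 1) / 2
    assert np.all((xs + 1) / 2 + delta >= 2 * f - 1e-9)
print("ok")
```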

Example 3

Let
$$ g(x) = \left(\frac{\sqrt{x}+1}{2}\right)^2, \quad f(x) = \alpha(1-\alpha)\frac{(x-1)^2}{(x^{\alpha}-1)(x^{1-\alpha}-1)}, \ \alpha \in (0,1), \quad k = \frac{f(0)}{8} = \frac{\alpha(1-\alpha)}{8}, \quad \ell = \frac{3}{2}. $$
Then
$$ g(x) + \Delta_g^f(x) \geq \frac{3}{2} f(x) $$

holds for 0 < α < 1.

Proof of Example 3. Note that $k\frac{(x-1)^2}{f(x)} = \frac{1}{8}(x^{\alpha}-1)(x^{1-\alpha}-1)$. Since
$$ \frac{1}{2}\left(\frac{1+\sqrt{x}}{2}\right)^2 - \frac{1}{8}(x^{\alpha}-1)(x^{1-\alpha}-1) = \frac{1}{8}\left(x + 2\sqrt{x} + 1 - x - 1 + x^{\alpha} + x^{1-\alpha}\right) = \frac{1}{8}\left(2\sqrt{x} + x^{\alpha} + x^{1-\alpha}\right) = \frac{1}{8}\left(x^{\alpha/2} + x^{(1-\alpha)/2}\right)^2 \geq 0, $$
we have
$$ 2\left(\frac{1+\sqrt{x}}{2}\right)^2 \geq \frac{1}{8}(x^{\alpha}-1)(x^{1-\alpha}-1) + \frac{3}{2}\left(\frac{1+\sqrt{x}}{2}\right)^2. $$
Since
$$ \alpha(1-\alpha)\frac{(x-1)^2}{(x^{\alpha}-1)(x^{1-\alpha}-1)} \leq \left(\frac{1+\sqrt{x}}{2}\right)^2, $$
we have
$$ 2\left(\frac{1+\sqrt{x}}{2}\right)^2 \geq \frac{1}{8}(x^{\alpha}-1)(x^{1-\alpha}-1) + \frac{3}{2}\,\alpha(1-\alpha)\frac{(x-1)^2}{(x^{\alpha}-1)(x^{1-\alpha}-1)}. $$
Then we have
$$ g(x) + \Delta_g^f(x) \geq \frac{3}{2} f(x). $$

Example 4

Let
$$ g(x) = \left(\frac{x^{\gamma}+1}{2}\right)^{1/\gamma} \quad \left(\frac{3}{4} \leq \gamma \leq 1\right), \qquad f(x) = \left(\frac{\sqrt{x}+1}{2}\right)^2, \qquad k = \frac{f(0)}{4} = \frac{1}{16}, \qquad \ell = 2. $$

Then $g(x) + \Delta_g^f(x) \geq 2 f(x)$.

In order to prove Example 4, we need the following lemma.

Lemma 3

For x > 0, we set the function of y as
$$ F(y) \equiv \left(\frac{1+x^y}{2}\right)^{1/y}. $$
Then F(y) has the following properties:
  1. F(y) is monotone increasing for $y \in \mathbb{R}$.
  2. F(y) is convex for y < 0.
  3. F(y) is concave for y ≥ 1/2.
We give the proof of Lemma 3 in the Appendix.
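The three properties can also be probed numerically; the sketch below (ours) samples F on a uniform grid for a few arbitrary values of x and inspects first and second differences:

```python
import numpy as np

def F(x, y):
    return ((1 + x**y) / 2) ** (1 / y)

ys = np.linspace(-3.0, 3.0, 601)
ys = ys[np.abs(ys) > 1e-9]             # y = 0 is a removable singularity of F
for x in (0.2, 0.7, 1.5, 6.0):
    assert np.all(np.diff(F(x, ys)) > -1e-10)          # (1) increasing in y
    neg = ys[ys < -1e-3]                               # uniform sub-grid, y < 0
    assert np.all(np.diff(F(x, neg), 2) > -1e-10)      # (2) convex for y < 0
    pos = ys[ys > 0.499]                               # uniform sub-grid, y >= 1/2
    assert np.all(np.diff(F(x, pos), 2) < 1e-10)       # (3) concave for y >= 1/2
print("ok")
```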

Proof of Example 4. By Lemma 3 (the concavity of F(y) for y ≥ 1/2, applied at the midpoint of 1/2 and 1, where $F(1/2) = \left(\frac{1+\sqrt{x}}{2}\right)^2$ and $F(1) = \frac{1+x}{2}$),
$$ 2\left(\frac{1+x^{3/4}}{2}\right)^{4/3} \geq \frac{1+x}{2} + \left(\frac{1+\sqrt{x}}{2}\right)^2. $$
It follows from the monotonicity that
$$ \left(\frac{1+x^y}{2}\right)^{1/y} \geq \left(\frac{1+x^{3/4}}{2}\right)^{4/3} $$
for $y \in [3/4, 1]$. Then
$$ 2\left(\frac{1+x^y}{2}\right)^{1/y} \geq \frac{1+x}{2} + \left(\frac{1+\sqrt{x}}{2}\right)^2 $$
for $y \in [3/4, 1]$. Therefore, since $\left(\frac{1+\sqrt{x}}{2}\right)^2 - \left(\frac{\sqrt{x}-1}{2}\right)^2 = \sqrt{x}$ and $\frac{1+x}{2} + \sqrt{x} = 2\left(\frac{\sqrt{x}+1}{2}\right)^2$, we have
$$ 2\left(\frac{1+x^y}{2}\right)^{1/y} - \left(\frac{\sqrt{x}-1}{2}\right)^2 \geq 2\left(\frac{\sqrt{x}+1}{2}\right)^2. $$
Hence, noting that $k\frac{(x-1)^2}{f(x)} = \left(\frac{\sqrt{x}-1}{2}\right)^2$, we have
$$ g(x) + \Delta_g^f(x) \geq 2 f(x). $$

Appendix

Proof of Lemma 3.
(i) Since F(y) > 0 for x > 0 and $y \in \mathbb{R}$, it is sufficient to prove $\frac{d}{dy}\log F(y) > 0$ for the proof of F′(y) > 0. We have
$$ \frac{d}{dy}\log F(y) = \frac{1}{y^2}\left[\log 2 + \frac{x^y\log x^y}{1+x^y} - \log(1+x^y)\right]. $$
Then we put
$$ G(r) \equiv (r+1)\log 2 + r\log r - (r+1)\log(r+1), \qquad r > 0, $$
where we put $x^y \equiv r > 0$; note that G(r) is the expression in brackets multiplied by r + 1. From elementary calculations, we have G(r) ≥ G(1) = 0, which implies $\frac{d}{dy}\log F(y) > 0$.
(ii) We first set $f(y) \equiv \log F(y)$. Since F(y) > 0 and $F'' = F\left((f')^2 + f''\right)$, we have only to prove f″(y) > 0 for the proof of F″(y) > 0. We set again $g(y) \equiv \frac{1+x^y}{2}$ for x > 0 and y < 0. Then we have
$$ \frac{d^2}{dy^2}\log g(y) = \frac{x^y(\log x)^2}{(1+x^y)^2} > 0. $$
In addition, by $f(y) = \frac{1}{y}\log g(y)$, we have
$$ f'(y) = \frac{1}{y}\frac{g'(y)}{g(y)} - \frac{1}{y^2}\log g(y) > 0 $$
by part (i). By $\frac{d^2}{dy^2}\log g(y) = \frac{g(y)g''(y) - g'(y)^2}{g(y)^2}$, we have
$$ f''(y) = \frac{1}{y}\cdot\frac{g(y)g''(y) - g'(y)^2}{g(y)^2} - \frac{2}{y^2}\frac{g'(y)}{g(y)} + \frac{2}{y^3}\log g(y) = \frac{1}{y}\frac{d^2}{dy^2}\log g(y) - \frac{2}{y}f'(y). $$
We prove f″(y) > 0 for y < 0. We calculate
$$ f''(y) = \frac{1}{y}\cdot\frac{x^y(\log x)^2}{(1+x^y)^2} - \frac{2}{y^3}\left[\log 2 + \frac{x^y\log x^y}{1+x^y} - \log(1+x^y)\right] = \frac{1}{y^3(1+x^y)^2}\left[-2x^y(1+x^y)\log x^y + x^y(\log x^y)^2 + 2(1+x^y)^2\log\frac{1+x^y}{2}\right]. $$
Thus, if we put
$$ h(y) \equiv -2x^y(1+x^y)\log x^y + x^y(\log x^y)^2 + 2(1+x^y)^2\log\frac{1+x^y}{2}, $$
then, since $y^3 < 0$ for y < 0, we have only to prove h(y) < 0 for y < 0. Since we have h(0) = 0, we have only to prove h′(y) > 0 for y < 0. Here we have
$$ h'(y) = x^y\log x\left[(\log x^y)^2 - 4x^y\log x^y + 4(1+x^y)\log\frac{1+x^y}{2}\right]. $$
If we set again
$$ l(t) \equiv 4t\log t - (\log t)^2 - 4(t+1)\log\frac{t+1}{2}, $$
where we put $x^y \equiv t > 0$, so that $h'(y) = -x^y\log x\cdot l(t)$, then we prove the following cases:
  (a) If x < 1 (i.e., t > 1), then l(t) > 0.
  (b) If x > 1 (i.e., 0 < t < 1), then l(t) < 0.
In either case it follows that h′(y) > 0 for y < 0.
For case (a), we calculate
$$ l'(t) = \frac{1}{t}\left[4t\log 2 + (4t-2)\log t - 4t\log(t+1)\right] $$
and
$$ l''(t) = \frac{2\left[(t+1)\log t + t - 1\right]}{t^2(t+1)} > 0, \qquad t > 1. $$
Thus, we have l′(t) ≥ l′(1) = 0, and then we have l(t) ≥ l(1) = 0. For case (b), we easily find that
$$ l''(t) = \frac{2\left[(t+1)\log t + t - 1\right]}{t^2(t+1)} < 0, \qquad 0 < t < 1, $$
so l′ is decreasing there. Thus, we have l′(t) ≥ l′(1) = 0 for 0 < t < 1, and then we have l(t) ≤ l(1) = 0.
(iii) We calculate
$$ \frac{d^2}{dy^2}F(y) = \frac{1}{y^4}\left(\frac{1+x^y}{2}\right)^{1/y} h(x,y), $$
where
$$\begin{aligned} h(x,y) = {}& (\log 2 - 2y)\log 2 + \frac{2\log 2}{1+x^y}\left\{x^y\log x^y - (1+x^y)\log(1+x^y)\right\} + \frac{x^y y^2(x^y+y)(\log x)^2}{(1+x^y)^2} \\ & - \frac{2x^y(1+x^y)\left(y + \log(1+x^y)\right)\log x^y}{(1+x^y)^2} + \left\{2y + \log(1+x^y)\right\}\log(1+x^y). \end{aligned}$$
We prove h(x,y) ≤ 0 for x > 0 and y ≥ 1/2. We have
$$ \frac{\partial h(x,y)}{\partial x} = -\frac{x^{y-1}y^2\log x}{(1+x^y)^3}\left\{\left(x^y(y-2) - y\right)\log x^y + 2(1+x^y)\log\frac{1+x^y}{2}\right\}. $$
Here we note that $\frac{\partial h(1,y)}{\partial x} = 0$. We also put
$$ g(x,y) = \left(x^y(y-2) - y\right)\log x^y + 2(1+x^y)\log\frac{1+x^y}{2}. $$
If we have g(x,y) ≥ 0 for x > 0 and y ≥ 1/2, then we have $\frac{\partial h(x,y)}{\partial x} \geq 0$ for 0 < x ≤ 1 and $\frac{\partial h(x,y)}{\partial x} \leq 0$ for x ≥ 1. Thus, we then obtain h(x,y) ≤ h(1,y) = 0 for y ≥ 1/2. Therefore, we have only to prove g(x,y) ≥ 0 for x > 0 and y ≥ 1/2.
(a) For the case 0 < x ≤ 1, we have
$$ \frac{\partial g(x,y)}{\partial x} = \frac{y}{x}\left[y(x^y-1) + (y-2)x^y\log x^y + 2x^y\log\frac{x^y+1}{2}\right]. $$
Since g(1,y) = 0, if we prove $\frac{\partial g(x,y)}{\partial x} \leq 0$, then we can prove g(x,y) ≥ g(1,y) = 0 for y ≥ 1/2 and 0 < x ≤ 1. Since we have the relations
$$ \frac{x-1}{\sqrt{x}} \leq \log x \leq \frac{2(x-1)}{x+1} \leq 0 $$
for 0 < x ≤ 1, we calculate
$$\begin{aligned} & y(x^y-1) + (y-2)x^y\log x^y + 2x^y\log\frac{x^y+1}{2} \\ &\quad \leq y(x^y-1) + (y-2)x^y\cdot\frac{x^y-1}{x^{y/2}} + 2x^y\cdot\frac{2\left(\frac{x^y+1}{2}-1\right)}{\frac{x^y+1}{2}+1} = \frac{x^y-1}{x^y+3}\left[3(y-2)x^{y/2} + (y-2)x^{3y/2} + 3y + (y+4)x^y\right]. \end{aligned}$$
Since $\frac{x^y-1}{x^y+3} \leq 0$, we have only to prove
$$ k(y) \equiv 3(y-2)x^{y/2} + (y-2)x^{3y/2} + 3y + (y+4)x^y \geq 0 $$
for 0 < x ≤ 1 and y ≥ 1/2. Since it is trivial that k(y) ≥ 0 for y ≥ 2, we assume 1/2 ≤ y < 2 from here. To this end, we prove that $k_1(y) \equiv 3(y-2)x^{y/2} + (y-2)x^{3y/2}$ and $k_2(y) \equiv 3y + (y+4)x^y$ are both monotone increasing for 1/2 ≤ y < 2. We easily find that
$$ \frac{dk_1(y)}{dy} = \frac{1}{2}x^{y/2}\left[2(x^y+3) + 3(x^y+1)(y-2)\log x\right] > 0 $$

for 0 < x ≤ 1 and 1/2 ≤ y < 2.

We also have
$$ \frac{dk_2(y)}{dy} = x^y + 3 + (y+4)x^y\log x. $$
Here we prove $\frac{dk_2(y)}{dy} \geq 0$ for 0 < x ≤ 1 and 1/2 ≤ y < 2. We put again
$$ k_3(x) \equiv x^y + 3 + (y+4)x^y\log x; $$
then we have
$$ \frac{dk_3(x)}{dx} = x^{y-1}\left[2(y+2) + y(y+4)\log x\right]. $$
Thus, we have
$$ \frac{dk_3(x)}{dx} = 0 \iff x = e^{-\frac{2(y+2)}{y(y+4)}} \equiv \alpha_y. $$
Since $\frac{dk_3(x)}{dx} < 0$ for $0 < x < \alpha_y$ and $\frac{dk_3(x)}{dx} > 0$ for $\alpha_y < x \leq 1$, we have
$$ k_3(x) \geq k_3(\alpha_y) = 3 - \frac{y+4}{y}\,e^{-\frac{2(y+2)}{y+4}} \equiv k_4(y). $$
Since we have $\frac{dk_4(y)}{dy} = \frac{8(y+2)\,e^{-\frac{2(y+2)}{y+4}}}{y^2(y+4)} > 0$, the function $k_4(y)$ is monotone increasing in y. Thus, we have
$$ k_3(x) \geq k_3(\alpha_y) = k_4(y) \geq k_4(1/2) = 3 - 9e^{-10/9} > 0, $$
since $e^{10/9} \simeq 3.03773 > 3$. Therefore, $k_2(y)$ is also a monotone increasing function of y for 0 < x ≤ 1 and 1/2 ≤ y < 2. Thus, k(y) is monotone increasing for y ≥ 1/2, and then we have
$$ k(y) \geq k(1/2) = \frac{3}{2}\left(1 - x^{1/4}\right)^3 \geq 0. $$
(b) For the case x ≥ 1, we firstly calculate
$$ \frac{\partial g(x,y)}{\partial y} = (x^y-1)\log x^y + \left[y(x^y-1) + (y-2)x^y\log x^y + 2x^y\log\frac{1+x^y}{2}\right]\log x. $$
We put
$$ p(x,y) \equiv y(x^y-1) + (y-2)x^y\log x^y + 2x^y\log\frac{1+x^y}{2}. $$
Then we calculate
$$ \frac{\partial p(x,y)}{\partial x} = \frac{y\,x^{y-1}}{1+x^y}\left[(1+x^y)(y-2)\log x^y + 2(1+x^y)\log\frac{1+x^y}{2} + 2\left(y(1+x^y) - 1\right)\right]. $$
Then we put
$$ q(x,y) = (y-2)\log x^y + 2\log\frac{1+x^y}{2} + 2y - \frac{2}{1+x^y}. $$
We have
$$ \frac{\partial q(x,y)}{\partial y} = \frac{\left((1+x^y)^2 y - 2\right)\log x + (1+x^y)^2\left(\log x^y + 2\right)}{(1+x^y)^2} > 0, $$
and then
$$ q(x,y) \geq q(x,1/2) = 1 - \frac{2}{\sqrt{x}+1} + 2\log\frac{1+\sqrt{x}}{2} - \frac{3}{4}\log x. $$
Since we find
$$ \frac{dq(x,1/2)}{dx} = \frac{(\sqrt{x}+3)(\sqrt{x}-1)}{4x(\sqrt{x}+1)^2} \geq 0 $$
for x ≥ 1, we have q(x,y) ≥ q(x,1/2) ≥ q(1,1/2) = 0. Therefore, we have $\frac{\partial p(x,y)}{\partial x} \geq 0$, which implies p(x,y) ≥ p(1,y) = 0. Thus, we have $\frac{\partial g(x,y)}{\partial y} \geq 0$, and then we have g(x,y) ≥ g(x,1/2), where
$$ g(x,1/2) = -\frac{1}{2}\left(3x^{1/2}+1\right)\log x^{1/2} + 2\left(x^{1/2}+1\right)\log\frac{x^{1/2}+1}{2}. $$
To prove g(x,1/2) ≥ 0 for x ≥ 1, we put $x^{1/2} \equiv z \geq 1$ and
$$ r(z) \equiv -\frac{1}{2}(3z+1)\log z + 2(z+1)\log\frac{z+1}{2}. $$
Since we have
$$ r''(z) = \frac{(z-1)^2}{2z^2(z+1)} \geq 0 $$
and
$$ r'(z) = \frac{1}{2z}\left[z - 1 - 3z\log z + 4z\log\frac{z+1}{2}\right], $$

we have r′(1) = 0 and then r′(z) ≥ 0 for z ≥ 1. Thus, we have r(z) ≥ 0 for z ≥ 1 by r(1) = 0. Finally, we have g(x,y) ≥ g(x,1/2) ≥ 0 for x ≥ 1 and y ≥ 1/2. □

Declarations

Acknowledgements

The first author (KY) was partially supported by JSPS KAKENHI Grant Number 23540208. The second author (SF) was partially supported by JSPS KAKENHI Grant Number 24540146.

Authors’ Affiliations

(1)
Graduate School of Science and Engineering, Yamaguchi University
(2)
College of Humanities and Science, Nihon University
(3)
Faculty of Education, Bukkyo University

References

  1. Heisenberg W: Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. Zeitschrift für Physik 1927, 43: 172–198. doi:10.1007/BF01397280
  2. Robertson HP: The uncertainty principle. Phys. Rev. 1929, 34: 163–164. doi:10.1103/PhysRev.34.163
  3. Schrödinger E: About Heisenberg uncertainty relation. Proc. Prussian Acad. Sci. Phys. Math. 1930, XIX: 293.
  4. Luo S: Heisenberg uncertainty relation for mixed states. Phys. Rev. A 2005, 72: 042110.
  5. Wigner EP, Yanase MM: Information content of distribution. Proc. Nat. Acad. Sci. 1963, 49: 910–918. doi:10.1073/pnas.49.6.910
  6. Luo S, Zhang Q: Informational distance on quantum-state space. Phys. Rev. A 2004, 69: 032106.
  7. Luo S: Quantum versus classical uncertainty. Theor. Math. Phys. 2005, 143: 681–688. doi:10.1007/s11232-005-0098-6
  8. Yanagi K: Uncertainty relation on Wigner-Yanase-Dyson skew information. J. Math. Anal. Appl. 2010, 365: 12–18. doi:10.1016/j.jmaa.2009.09.060
  9. Lieb EH: Convex trace functions and the Wigner-Yanase-Dyson conjecture. Adv. Math. 1973, 11: 267–288. doi:10.1016/0001-8708(73)90011-X
  10. Yanagi K: Uncertainty relation on generalized Wigner-Yanase-Dyson skew information. Linear Algebra Appl. 2010, 433: 1524–1532. doi:10.1016/j.laa.2010.05.024
  11. Cai L, Luo S: On convexity of generalized Wigner-Yanase-Dyson information. Lett. Math. Phys. 2008, 83: 253–264. doi:10.1007/s11005-008-0222-2
  12. Gibilisco P, Hansen F, Isola T: On a correspondence between regular and non-regular operator monotone functions. Linear Algebra Appl. 2009, 430: 2225–2232. doi:10.1016/j.laa.2008.11.022
  13. Petz D: Monotone metrics on matrix spaces. Linear Algebra Appl. 1996, 244: 81–96.
  14. Petz D, Hasegawa H: On the Riemannian metric of α-entropies of density matrices. Lett. Math. Phys. 1996, 38: 221–225. doi:10.1007/BF00398324
  15. Furuta T: Elementary proof of Petz-Hasegawa theorem. Lett. Math. Phys. 2012, 101: 355–359. doi:10.1007/s11005-012-0568-3
  16. Gibilisco P, Imparato D, Isola T: Uncertainty principle and quantum Fisher information, II. J. Math. Phys. 2007, 48: 072109. doi:10.1063/1.2748210
  17. Kubo F, Ando T: Means of positive linear operators. Math. Ann. 1980, 246: 205–224. doi:10.1007/BF01371042
  18. Hansen F: Metric adjusted skew information. Proc. Nat. Acad. Sci. 2008, 105: 9909–9916. doi:10.1073/pnas.0803323105
  19. Gibilisco P, Isola T: On a refinement of Heisenberg uncertainty relation by means of quantum Fisher information. J. Math. Anal. Appl. 2011, 375: 270–275. doi:10.1016/j.jmaa.2010.09.029
  20. Furuichi S, Yanagi K: Schrödinger uncertainty relation, Wigner-Yanase-Dyson skew information and metric adjusted correlation measure. J. Math. Anal. Appl. 2012, 388: 1147–1156. doi:10.1016/j.jmaa.2011.10.061
  21. Yanagi K: Metric adjusted skew information and uncertainty relation. J. Math. Anal. Appl. 2011, 380: 888–892. doi:10.1016/j.jmaa.2011.03.068
  22. Gibilisco P, Hiai F, Petz D: Quantum covariance, quantum Fisher information, and the uncertainty relations. IEEE Trans. Inf. Theory 2009, 55: 439–443.

Copyright

© Yanagi et al.; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.