Tensor trains
Basics
Multiplication
Tensor train operator times tensor train vector
To multiply a tensor train operator (TToperator) by a tensor train vector (TTvector), use the * operator.
using TensorTrainNumerics
# Define the dimensions and ranks for the TTvector
dims = (2, 2, 2)
rks = [1, 2, 2, 1]
# Create a random TTvector
tt_vec = rand_tt(dims, rks)
# Define the dimensions for the TToperator
op_dims = (2, 2, 2)
# Create a random TToperator
tt_op = rand_tto(op_dims, 3)
# Perform the multiplication
result = tt_op * tt_vec
# Visualize the result
visualize(result)
1-- • -- 4-- • -- 4-- • -- 1
    |        |        |
    2        2        2
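The interior ranks of the product are the element-wise products of the operator and vector ranks, which is why the diagram shows 4 = 2 * 2 at the inner bonds. As a quick check (a minimal sketch, assuming the product is again a TTvector whose ranks are stored in the ttv_rks field used in the decomposition example further below):
# The ranks of an operator-vector product are the products of the individual ranks
println(result.ttv_rks)
# expected: [1, 4, 4, 1], matching the diagram above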
Tensor train operator times tensor train operator
To multiply two tensor train operators, use the * operator.
# Create another random TToperator
tt_op2 = rand_tto(op_dims, 3)
# Perform the multiplication
result_op = tt_op * tt_op2
# Visualize the result
visualize(result_op)
    2       2       2
    |       |       |
1-- • --4-- • --4-- • --1
    |       |       |
    2       2       2
Addition
To add two tensor train vectors or operators, use the + operator.
# Create another random TTvector
tt_vec2 = rand_tt(dims, rks)
# Perform the addition
result_add = tt_vec + tt_vec2
# Visualize the result
visualize(result_add)
1-- • -- 4-- • -- 4-- • -- 1
    |        |        |
    2        2        2
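Because addition stacks the cores block-wise, the interior ranks add up (2 + 2 = 4 above). Since matricize (see the Matricization section below) returns the dense vector of a TTvector, the sum can also be checked against ordinary vector addition; a minimal sketch:
using LinearAlgebra
# The TT sum should agree with the entry-wise sum of the dense representations
err = norm(matricize(result_add) - (matricize(tt_vec) + matricize(tt_vec2)))
println(err)
# expected: ~0 up to floating-point round-off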
Concatenation
To concatenate two tensor train vectors or operators, use the concatenate function.
# Concatenate two TTvectors
result_concat = concatenate(tt_vec, tt_vec2)
# Visualize the result
visualize(result_concat)
1-- • -- 2-- • -- 2-- • -- 1-- • -- 2-- • -- 2-- • -- 1
    |        |        |        |        |        |
    2        2        2        2        2        2
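Concatenation chains the cores of the two tensor trains one after the other, so the number of cores doubles while the individual ranks are kept. A small check of the resulting ranks (assuming the ttv_rks field used in the decomposition example below):
# Six cores with the chained ranks of the two inputs
println(result_concat.ttv_rks)
# expected: [1, 2, 2, 1, 2, 2, 1], matching the diagram above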
Matricization
To convert a tensor train vector or operator back into its full dense form, use the matricize function.
# Matricize the TTvector
result_matrix = matricize(tt_vec)
# Print the result
println(result_matrix)
[-2.432628070253386, 0.8401351830546573, 4.702879865485085, 2.3256623814608166, -1.0684539978532288, 0.25141819010350874, 2.208175065489419, 1.0339540208706417]
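For a TTvector the result is the full dense vector of the underlying tensor, so its length equals the product of the mode dimensions; a minimal check:
# 2 * 2 * 2 = 8 entries in the dense representation
println(length(result_matrix) == prod(dims))
# expected: true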
Visualization
To visualize a tensor train vector or operator, use the visualize function.
# Visualize the TTvector
visualize(tt_vec)
1-- • -- 2-- • -- 2-- • -- 1
    |        |        |
    2        2        2
Tensor Train Decomposition
The ttv_decomp function performs a tensor train decomposition on a given tensor.
using TensorTrainNumerics
# Define a 3-dimensional tensor
tensor = rand(2, 3, 4)
# Perform the tensor train decomposition
ttv = ttv_decomp(tensor)
# Print the TTvector ranks
println(ttv.ttv_rks)
[1, 2, 4, 1]
Explanation
The ttv_decomp function takes a tensor as input and returns its tensor train decomposition in the form of a TTvector. The decomposition is performed using the hierarchical SVD algorithm, which decomposes the tensor into a series of smaller tensors (cores) connected by ranks.
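One way to see that the cores indeed represent the original tensor is to compare the dense form of the decomposition with the input. The sketch below assumes that matricize returns the entries in the same column-major order as vec; if the ordering differs, the comparison has to be permuted accordingly:
using LinearAlgebra
# Reconstruction error of the (untruncated) decomposition from above
println(norm(matricize(ttv) - vec(tensor)))
# expected: ~0 up to round-off, provided the orderings match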
Example with Tolerance
using TensorTrainNumerics
# Define a 3-dimensional tensor
tensor = rand(2, 3, 4)
# Perform the tensor train decomposition with a custom tolerance
ttv = ttv_decomp(tensor, tol=1e-10)
# Print the TTvector ranks
println(ttv.ttv_rks)
[1, 2, 4, 1]
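The tolerance controls how aggressively small singular values are discarded during the hierarchical SVD, and therefore how strongly the ranks are truncated. For a generic random tensor nothing is cut off, as the ranks above show; for a tensor of low TT-rank the ranks collapse accordingly. A small sketch with a rank-1 tensor:
using TensorTrainNumerics
# A rank-1 tensor (outer product of three vectors) has all TT-ranks equal to 1
a, b, c = rand(2), rand(3), rand(4)
low_rank_tensor = [a[i] * b[j] * c[k] for i in 1:2, j in 1:3, k in 1:4]
ttv_lr = ttv_decomp(low_rank_tensor, tol=1e-10)
println(ttv_lr.ttv_rks)
# expected: [1, 1, 1, 1]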
Optimization
ALS
The ALS (alternating linear scheme) solver sweeps back and forth over the TT cores, optimizing one core at a time while the others are held fixed; als_linsolve solves linear systems Ax = b and als_eigsolve computes the lowest eigenvalue and its eigenvector.
using TensorTrainNumerics
# Define the dimensions and ranks for the TTvector
dims = (2, 2, 2)
rks = [1, 2, 2, 1]
# Create a random TTvector for the initial guess
tt_start = rand_tt(dims, rks)
# Create a random TToperator for the matrix A
A_dims = (2, 2, 2)
A = rand_tto(A_dims, 3)
# Create a random TTvector for the right-hand side b
b = rand_tt(dims, rks)
# Solve the linear system Ax = b using the ALS algorithm
tt_opt = als_linsolve(A, b, tt_start; sweep_count=2)
# Print the optimized TTvector
println(tt_opt)
# Define the sweep schedule and rank schedule for the eigenvalue problem
sweep_schedule = [2, 4]
rmax_schedule = [2, 3]
# Solve the eigenvalue problem using the ALS algorithm
eigenvalues, tt_eigvec = als_eigsolve(A, tt_start; sweep_schedule=sweep_schedule, rmax_schedule=rmax_schedule)
# Print the lowest eigenvalue and the corresponding eigenvector
println("Lowest eigenvalue: ", eigenvalues[end])
println("Corresponding eigenvector: ", tt_eigvec)
TTvector{Float64, 3}(3, [[2.230914221382358; -2.1802367558685782;;; -0.9565125652102084; -2.173670742791006], [-0.5959679470805677 -0.6700843691170483; 0.6711348316124702 -0.21662345794560744;;; 0.43614712349251794 -0.6306182480247681; -0.06462144008204068 -0.3261622924111045], [-0.9831584023035564 -0.18275545403603766; -0.18275545403603766 0.9831584023035564;;;]], (2, 2, 2), [1, 2, 2, 1], [0, 1, 1])
Lowest eigenvalue: -13.983802683350744
Corresponding eigenvector: TTvector{Float64, 3}(3, [[-0.9147214506996872; -0.39916826813229145;;; 0.026826930705093933; -0.056830248416407686], [-0.45649254013450413 -0.6218823683511375; 0.759093456442205 -0.1047584968063826;;; -0.24641581504723753 -0.307736418106673; -0.39328225396371447 0.712450892519423], [-0.6443489271178209 -0.7647316262075954; -0.7647316262075954 0.6443489271178208;;;]], (2, 2, 2), [1, 2, 2, 1], [0, 1, 1])
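To judge how well the two ALS sweeps solved the linear system, the residual can be evaluated in the dense representation, reusing the operator-vector product and matricize from the sections above; a minimal sketch:
using LinearAlgebra
# Residual of the approximate solution; small values indicate convergence
residual = norm(matricize(A * tt_opt) - matricize(b))
println("Linear system residual: ", residual)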
MALS
The MALS (modified ALS) solver optimizes two neighboring cores at a time, which allows the TT ranks to adapt during the sweeps up to the prescribed maximum rank.
using TensorTrainNumerics
dims = (2, 2, 2)
rks = [1, 2, 2, 1]
# Create a random TTvector for the initial guess
tt_start = rand_tt(dims, rks)
# Create a random TToperator for the matrix A
A_dims = (2, 2, 2)
A = rand_tto(A_dims, 3)
# Create a random TTvector for the right-hand side b
b = rand_tt(dims, rks)
# Solve the linear system Ax = b using the MALS algorithm
tt_opt = mals_linsolve(A, b, tt_start; tol=1e-12, rmax=4)
# Print the optimized TTvector
println(tt_opt)
# Define the sweep schedule and rank schedule for the eigenvalue problem
sweep_schedule = [2, 4]
rmax_schedule = [2, 3]
# Solve the eigenvalue problem using the MALS algorithm
eigenvalues, tt_eigvec, r_hist = mals_eigsolve(A, tt_start; tol=1e-12, sweep_schedule=sweep_schedule, rmax_schedule=rmax_schedule)
# Print the lowest eigenvalue and the corresponding eigenvector
println("Lowest eigenvalue: ", eigenvalues[end])
println("Corresponding eigenvector: ", tt_eigvec)
println("Rank history: ", r_hist)
TTvector{Float64, 3}(3, [[-0.1680090164331587; 0.8651803414282438;;; 0.33481395133731784; 0.06501738419000773], [0.24962619582718942 0.7411554318516265; 0.9468219229776218 -0.05204542494746296;;; -0.13408322357545147 0.41825708450548504; 0.1524358805577616 -0.5225331672150955], [0.07735675283281927 0.9970034768199967; 0.9970034768199967 -0.07735675283281927;;;]], (2, 2, 2), [1, 2, 2, 1], [0, 1, 1])
Lowest eigenvalue: -4.253725703957233
Corresponding eigenvector: TTvector{Float64, 3}(3, [[-0.8571329053255144; 0.23845444646630198;;; 0.12237250734385603; 0.4398722871642068], [-0.35946471431523075 -0.9329808249061252; 0.9298889666987669 -0.3565878688622657;;; -0.05731508229776541 0.03143497978612769; 0.05297744912101012 -0.03746617385855474], [-0.982779168437792 0.18478394433695333; 0.18478394433695333 0.982779168437792;;;]], (2, 2, 2), [1, 2, 2, 1], [0, 1, 1])
Rank history: [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
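The quality of a computed eigenpair can be checked in the same dense form. For a generic (non-symmetric) random operator the sweep-based eigensolvers may not converge tightly, so the residual is mainly a diagnostic; a hedged sketch:
using LinearAlgebra
# Eigenpair residual of A*v - lambda*v in the dense representation
lambda = eigenvalues[end]
res = norm(matricize(A * tt_eigvec) - lambda * matricize(tt_eigvec))
println("Eigenpair residual: ", res)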
DMRG
The DMRG (density matrix renormalization group) solver is a closely related two-site scheme; it reports the current rank and the discarded truncation weight at every optimization step.
using TensorTrainNumerics
dims = (2, 2, 2)
rks = [1, 2, 2, 1]
# Create a random TTvector for the initial guess
tt_start = rand_tt(dims, rks)
# Create a random TToperator for the matrix A
A_dims = (2, 2, 2)
A = rand_tto(A_dims, 3)
# Create a random TTvector for the right-hand side b
b = rand_tt(dims, rks)
# Solve the linear system Ax = b using the DMRG algorithm
tt_opt = dmrg_linsolve(A, b, tt_start; sweep_count=2, N=2, tol=1e-12)
# Print the optimized TTvector
println(tt_opt)
# Define the sweep schedule and rank schedule for the eigenvalue problem
sweep_schedule = [2, 4]
rmax_schedule = [2, 3]
# Solve the eigenvalue problem using the DMRG algorithm
eigenvalues, tt_eigvec, r_hist = dmrg_eigsolve(A, tt_start; N=2, tol=1e-12, sweep_schedule=sweep_schedule, rmax_schedule=rmax_schedule)
# Print the lowest eigenvalue and the corresponding eigenvector
println("Lowest eigenvalue: ", eigenvalues[end])
println("Corresponding eigenvector: ", tt_eigvec)
println("Rank history: ", r_hist)
Rank: 2, Max rank=2
Discarded weight: 0.0
Rank: 2, Max rank=2
Discarded weight: 0.0
Rank: 2, Max rank=2
Discarded weight: 0.0
TTvector{Float64, 3}(3, [[-7.295384418245352; 8.770131173272368;;; 2.5233733226840456; 2.0990539429818247], [-0.5508140696659014 0.5120659445414777; 0.8217685289275636 0.41071550949027447;;; 0.1065308626956522 -0.7228187817327182; 0.09975730958499006 0.21594964107267625], [-0.9921764962283789 0.12484310286106189; 0.12484310286106189 0.9921764962283789;;;]], (2, 2, 2), [1, 2, 2, 1], [0, -1, -1])
Rank: 2, Max rank=2
Discarded weight: 0.0
Rank: 2, Max rank=2
Discarded weight: 0.0
Rank: 2, Max rank=3
Discarded weight: 0.0
Rank: 2, Max rank=3
Discarded weight: 0.0
Rank: 2, Max rank=3
Discarded weight: 0.0
Rank: 2, Max rank=3
Discarded weight: 0.0
Rank: 2, Max rank=3
Discarded weight: 0.0
Lowest eigenvalue: -5.4007520318169755
Corresponding eigenvector: TTvector{Float64, 3}(3, [[-0.19753694303388253; -0.9643221217191257;;; -0.17265885454027347; 0.03536837073986721], [0.6743608289068637 0.6621512927132559; 0.16367187895795152 0.192322456840737;;; -0.6515764175401761 0.47751797234314314; -0.3064264358372678 0.5445588345093667], [-0.5634691184235846 -0.8261371269849506; -0.8261371269849506 0.5634691184235846;;;]], (2, 2, 2), [1, 2, 2, 1], [0, -1, -1])
Rank history: [2, 2, 2, 2, 2, 2, 2]
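The effect of the rank schedule can also be read off directly from the bond ranks of the computed tensor trains (via the ttv_rks field, as in the decomposition example); a short sketch:
# Final bond ranks of the DMRG solutions
println("Linear-system solution ranks: ", tt_opt.ttv_rks)
println("Eigenvector ranks: ", tt_eigvec.ttv_rks)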