sandy.core.cov module

class sandy.core.cov.CategoryCov(*args, **kwargs)

Bases: object

Attributes:
data

Covariance matrix as a dataframe.

size

Methods

corr2cov(std)

Produce covariance matrix given correlation matrix and standard deviation array.

from_stack(data_stack, index, columns, values)

Create a covariance matrix from a stacked dataframe.

from_stdev(std)

Construct the covariance matrix from the standard deviation vector.

from_var(var)

Construct the covariance matrix from the variance vector.

get_corr()

Extract correlation matrix.

get_eig([tolerance])

Extract eigenvalues and eigenvectors.

get_L([tolerance])

Extract lower triangular matrix L for which L*L^T == self.

get_std()

Extract standard deviations.

invert()

Method for calculating the inverse matrix.

sampling(nsmp[, seed, pdf, tolerance, relative])

Extract perturbation coefficients according to chosen distribution with covariance from given covariance matrix.

corr2cov(std)

Produce covariance matrix given correlation matrix and standard deviation array. Same as corr2cov but works with CategoryCov instances.

Parameters:
corr : CategoryCov

square 2D correlation matrix

std : 1D iterable

array of standard deviations

Returns:
CategoryCov

covariance matrix

Examples

Initialize index and columns.

>>> idx = ["A", "B", "C"]
>>> std = np.array([1, 2, 3])
>>> corr = sandy.CategoryCov([[1, 0, 2], [0, 3, 0], [2, 0, 1]], index=idx, columns=idx)
>>> corr.corr2cov(std)
            A           B           C
A 1.00000e+00 0.00000e+00 6.00000e+00
B 0.00000e+00 1.20000e+01 0.00000e+00
C 6.00000e+00 0.00000e+00 9.00000e+00

property data

Covariance matrix as a dataframe.

Returns:
pandas.DataFrame

covariance matrix

Notes

Note

In the future, additional tests will be implemented to check that the covariance matrix is symmetric and has positive variances.

Examples

>>> import pytest
>>> with pytest.raises(TypeError): sandy.CategoryCov(np.array[1])
>>> with pytest.raises(TypeError): sandy.CategoryCov(np.array([[1, 2], [2, -4]]))
>>> with pytest.raises(TypeError): sandy.CategoryCov(np.array([[1, 2], [3, 4]]))
Attributes:
index : pandas.Index or pandas.MultiIndex

indices

columns : pandas.Index or pandas.MultiIndex

columns

values : numpy.array

covariance values as float

classmethod from_stack(data_stack, index, columns, values, rows=10000000, kind='upper')

Create a covariance matrix from a stacked dataframe.

Parameters:
data_stack : pd.DataFrame

Stacked dataframe.

index : 1D iterable, optional

Index of the final covariance matrix.

columns : 1D iterable, optional

Columns of the final covariance matrix.

values : str, optional

Name of the column where the values are located.

rows : int, optional

Number of rows to take into account into each loop. The default is 10000000.

kind : str, optional

Select if the stack data represents upper or lower triangular matrix. The default is 'upper'.

Returns:
sandy.CategoryCov

Covariance matrix.

Examples

If the stack data represents the covariance matrix:

>>> S = pd.DataFrame(np.array([[1, 1, 1], [1, 2, 1], [1, 1, 1]]))
>>> S = S.stack().reset_index().rename(columns = {'level_0': 'dim1', 'level_1': 'dim2', 0: 'cov'})
>>> S = S[S['cov'] != 0]
>>> sandy.CategoryCov.from_stack(S, index=['dim1'], columns=['dim2'], values='cov', kind='all')
dim2            0           1           2
dim1
0     1.00000e+00 1.00000e+00 1.00000e+00
1     1.00000e+00 2.00000e+00 1.00000e+00
2     1.00000e+00 1.00000e+00 1.00000e+00

If the stack data represents only the upper triangular part of the covariance matrix:

>>> test_1 = sandy.CategoryCov.from_stack(minimal_covtest, index=["MAT", "MT", "E"], columns=["MAT1", "MT1", "E1"], values='VAL').data
>>> test_1
                MAT1        9437
            MT1         2                           102
            E1          1.00000e-02     2.00000e+05     1.00000e-02     2.00000e+05
MAT        MT   E
9437    2       1.00000e-02     2.00000e-02     0.00000e+00     4.00000e-02     0.00000e+00
            2.00000e+05 0.00000e+00     9.00000e-02     0.00000e+00     5.00000e-02
      102       1.00000e-02     4.00000e-02     0.00000e+00     1.00000e-02     0.00000e+00
            2.00000e+05 0.00000e+00     5.00000e-02     0.00000e+00     1.00000e-02

>>> test_2 = sandy.CategoryCov.from_stack(minimal_covtest, index=["MAT", "MT", "E"], columns=["MAT1", "MT1", "E1"], values='VAL', rows=1).data
>>> test_2
                MAT1        9437
            MT1         2                           102
            E1          1.00000e-02     2.00000e+05     1.00000e-02     2.00000e+05
MAT        MT   E                               
9437    2       1.00000e-02     2.00000e-02     0.00000e+00     4.00000e-02     0.00000e+00
            2.00000e+05 0.00000e+00     9.00000e-02     0.00000e+00     5.00000e-02
      102       1.00000e-02     4.00000e-02     0.00000e+00     1.00000e-02     0.00000e+00
            2.00000e+05 0.00000e+00     5.00000e-02     0.00000e+00     1.00000e-02
>>> assert (test_1 == test_2).all().all()

If the stack data represents only the lower triangular part of the covariance matrix:

>>> test_1 = sandy.CategoryCov.from_stack(minimal_covtest, index=["MAT1", "MT1", "E1"], columns=["MAT", "MT", "E"], values='VAL', kind="lower").data
>>> test_1
                MAT         9437
            MT          2                           102
            E           1.00000e-02     2.00000e+05     1.00000e-02     2.00000e+05
MAT1  MT1       E1
9437    2       1.00000e-02     2.00000e-02     0.00000e+00     4.00000e-02     0.00000e+00
            2.00000e+05 0.00000e+00     9.00000e-02     0.00000e+00     5.00000e-02
      102       1.00000e-02     4.00000e-02     0.00000e+00     1.00000e-02     0.00000e+00
            2.00000e+05 0.00000e+00     5.00000e-02     0.00000e+00     1.00000e-02

>>> test_2 = sandy.CategoryCov.from_stack(minimal_covtest, index=["MAT1", "MT1", "E1"], columns=["MAT", "MT", "E"], values='VAL', kind="lower", rows=1).data
>>> test_2
                MAT         9437
            MT          2                           102
            E           1.00000e-02     2.00000e+05     1.00000e-02     2.00000e+05
MAT1  MT1       E1                              
9437    2       1.00000e-02     2.00000e-02     0.00000e+00     4.00000e-02     0.00000e+00
            2.00000e+05 0.00000e+00     9.00000e-02     0.00000e+00     5.00000e-02
      102       1.00000e-02     4.00000e-02     0.00000e+00     1.00000e-02     0.00000e+00
            2.00000e+05 0.00000e+00     5.00000e-02     0.00000e+00     1.00000e-02
>>> assert (test_1 == test_2).all().all()
classmethod from_stdev(std)

Construct the covariance matrix from the standard deviation vector.

Parameters:
std : 1D iterable

Standard deviation vector.

Returns:
CategoryCov

Object containing the covariance matrix.
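
A minimal usage sketch (not part of the original docstring; assumes numpy and sandy are imported as in the other examples): the standard deviations are expected to be squared and placed on the diagonal of the covariance matrix.

>>> std = np.array([1, 2, 3])
>>> cov = sandy.CategoryCov.from_stdev(std)
>>> np.testing.assert_array_almost_equal(cov.data, np.diag(std ** 2))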

classmethod from_var(var)

Construct the covariance matrix from the variance vector.

Parameters:
var : 1D iterable

Variance vector.

Returns:
CategoryCov

Object containing the covariance matrix.
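
A minimal usage sketch (hedged, not from the original docstring): the variance vector should end up on the diagonal of the covariance matrix, with zero off-diagonal terms.

>>> var = np.array([1, 2, 3])
>>> cov = sandy.CategoryCov.from_var(var)
>>> np.testing.assert_array_almost_equal(cov.data, np.diag(var))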

get_L(tolerance=None)

Extract lower triangular matrix L for which L*L^T == self.

Parameters:
rows : int, optional

Option to use row calculation for matrix calculations. This option defines the number of lines to be taken into account in each loop. The default is None.

tolerance : float, optional, default is None

replace all eigenvalues smaller than a given tolerance with zeros.

Returns:
pandas.DataFrame

Cholesky decomposition lower triangular matrix.

Examples

Positive definite matrix:

>>> a = np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]])
>>> sandy.CategoryCov(a).get_L()
               0                  1               2
0       -2.00000e+00    0.00000e+00     0.00000e+00
1       -6.00000e+00    1.00000e+00     0.00000e+00
2        8.00000e+00    5.00000e+00     3.00000e+00

>>> sandy.CategoryCov(a).get_L(tolerance=0)
               0                  1               2
0       -2.00000e+00    0.00000e+00     0.00000e+00
1       -6.00000e+00    1.00000e+00     0.00000e+00
2        8.00000e+00    5.00000e+00     3.00000e+00
>>> sandy.CategoryCov([[1, -2],[-2, 3]]).get_L(tolerance=0)
               0                  1
0       -1.08204e+00    0.00000e+00
1        1.75078e+00    0.00000e+00

Decomposition test:

>>> L = sandy.CategoryCov(a).get_L()
>>> L.dot(L.T)
               0               1               2
0     4.00000e+00     1.20000e+01    -1.60000e+01
1     1.20000e+01     3.70000e+01    -4.30000e+01
2    -1.60000e+01    -4.30000e+01     9.80000e+01

Matrix with negative eigenvalues, tolerance of 0:

>>> L = sandy.CategoryCov([[1, -2],[-2, 3]]).get_L(tolerance=0)
>>> L.dot(L.T)
              0               1
0   1.17082e+00    -1.89443e+00
1  -1.89443e+00     3.06525e+00

get_corr()

Extract correlation matrix.

Returns:
CategoryCov

correlation matrix

Examples

>>> sandy.CategoryCov([[4, 2.4],[2.4, 9]]).get_corr()
            0           1
0 1.00000e+00 4.00000e-01
1 4.00000e-01 1.00000e+00
get_eig(tolerance=None)

Extract eigenvalues and eigenvectors.

Parameters:
tolerance : float, optional, default is None

replace all eigenvalues smaller than a given tolerance with zeros. The replacement condition is implemented as:

\[\frac{e_i}{e_{MAX}} < tolerance\]

Then, a tolerance=1e-3 will replace all eigenvalues 1000 times smaller than the largest eigenvalue. A tolerance=0 will replace all negative eigenvalues.

Returns:
pandas.Series

array of eigenvalues

pandas.DataFrame

matrix of eigenvectors

Notes

Note

only the real part of the eigenvalues is preserved

Note

the discussion associated with the implementation of this algorithm is available here: https://github.com/luca-fiorito-11/sandy/discussions/135

Examples

Extract eigenvalues of correlation matrix.

>>> sandy.CategoryCov([[1, 0.4], [0.4, 1]]).get_eig()[0]
0   1.40000e+00
1   6.00000e-01
Name: EIG, dtype: float64

Extract eigenvectors of correlation matrix.

>>> sandy.CategoryCov([[1, 0.4], [0.4, 1]]).get_eig()[1]
            0            1
0 7.07107e-01 -7.07107e-01
1 7.07107e-01  7.07107e-01

Extract eigenvalues of covariance matrix.

>>> sandy.CategoryCov([[0.1, 0.1], [0.1, 1]]).get_eig()[0]
0   8.90228e-02
1   1.01098e+00
Name: EIG, dtype: float64

Set up a tolerance.

>>> sandy.CategoryCov([[0.1, 0.1], [0.1, 1]]).get_eig(tolerance=0.1)[0]
0   0.00000e+00
1   1.01098e+00
Name: EIG, dtype: float64

Test with negative eigenvalues.

>>> sandy.CategoryCov([[1, 2], [2, 1]]).get_eig()[0]
0    3.00000e+00
1   -1.00000e+00
Name: EIG, dtype: float64

Replace negative eigenvalues.

>>> sandy.CategoryCov([[1, 2], [2, 1]]).get_eig(tolerance=0)[0]
0   3.00000e+00
1   0.00000e+00
Name: EIG, dtype: float64

Check output size.

>>> cov = sandy.CategoryCov.random_cov(50, seed=11)
>>> assert cov.get_eig()[0].size == cov.data.shape[0] == 50

>>> sandy.CategoryCov([[1, 0.2, 0.1], [0.2, 2, 0], [0.1, 0, 3]]).get_eig()[0]
0   9.56764e-01
1   2.03815e+00
2   3.00509e+00
Name: EIG, dtype: float64

Real test on H1 file.

>>> endf6 = sandy.get_endf6_file("jeff_33", "xs", 10010)
>>> ek = sandy.energy_grids.CASMO12
>>> err = endf6.get_errorr(errorr_kws=dict(ek=ek), err=1)["errorr33"]
>>> cov = err.get_cov()
>>> cov.get_eig()[0].sort_values(ascending=False).head(7)
0    3.66411e-01
1    7.05311e-03
2    1.55346e-03
3    1.60175e-04
4    1.81374e-05
5    1.81078e-06
6    1.26691e-07
Name: EIG, dtype: float64

>>> assert not (cov.get_eig()[0] >= 0).all()
>>> assert (cov.get_eig(tolerance=0)[0] >= 0).all()
get_std()

Extract standard deviations.

Returns:
pandas.Series

1d array of standard deviations

Examples

>>> sandy.CategoryCov([[1, 0.4],[0.4, 1]]).get_std()
0   1.00000e+00
1   1.00000e+00
Name: STD, dtype: float64
get_svd(check_finite=False, **kwargs)
gls_cov_update(S, Vy_extra=None)

Perform GLS update for a given covariance matrix, sensitivity and covariance matrix of the extra information:

\[V_{x_{post}} = V_{x_{prior}} - V_{x_{prior}}\cdot S^T \cdot \left(S\cdot V_{x_{prior}}\cdot S^T + V_{y_{extra}}\right)^{-1} \cdot S\cdot V_{x_{prior}}\]

Parameters:
Vy_extra : 2D iterable or single-element 1D iterable

covariance matrix of the extra information, (M, M) or (1,).

S : 2D or 1D iterable

Sensitivity matrix (M, N) or sensitivity vector (N,).

Returns:
sandy.CategoryCov

CategoryCov object corresponding to the updated covariance matrix adjusted with the GLS technique.

Notes

Note

If Vy_extra=None, the constrained GLS update technique will be performed.
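
A small numerical sketch of the update formula above (illustrative, not from the original docstring; the expected matrix is computed directly with numpy, treating S as an (M, N) sensitivity and Vy_extra as an (M, M) covariance):

>>> S = np.array([[1., 2.]])
>>> Vx = np.diag([1., 1.])
>>> Vy = np.array([[1.]])
>>> expected = Vx - Vx @ S.T @ np.linalg.inv(S @ Vx @ S.T + Vy) @ S @ Vx
>>> post = sandy.CategoryCov(Vx).gls_cov_update(S, Vy)
>>> np.testing.assert_array_almost_equal(post.data, expected)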

invert()

Method for calculating the inverse matrix.

Returns:
pandas.DataFrame

The inverse matrix.

Notes

Many covariance matrices for nuclear data are ill-defined and might have condition numbers that make the matrix inversion process impossible. To make up for this limitation we produce the (Moore-Penrose) pseudo-inverse of a Hermitian matrix, as implemented in numpy. Default options are used. This method does not require any pre-processing of the covariance data, e.g. removing zeros from the matrix diagonal or truncating eigenvalues.

Examples

>>> c = sandy.CategoryCov(np.diag(np.array([1, 2, 3])))
>>> c.invert()
            0           1           2
0 1.00000e+00 0.00000e+00 0.00000e+00
1 0.00000e+00 5.00000e-01 0.00000e+00
2 0.00000e+00 0.00000e+00 3.33333e-01

Test the product $A \cdot A^{-1} = I$.

>>> c = sandy.CategoryCov([[2, 0.5], [0.5, 2]])
>>> np.testing.assert_array_almost_equal(c.data @ c.invert(), np.eye(2))

Test inverse of inverse.

>>> c = sandy.CategoryCov(np.diag(np.array([1, 2, 3])))
>>> np.testing.assert_array_equal(sandy.CategoryCov(c.invert()).invert(), c.data)

Test on real ND covariance. With the previous implementation this test failed.

>>> c = sandy.get_endf6_file("jeff_33", "xs", 10010).get_errorr(err=1, errorr_kws=dict(mt=102))["errorr33"].get_cov()
>>> cinv = c.invert()
>>> a = c.data.values
>>> b = cinv.values
>>> np.testing.assert_array_almost_equal(a, a @ b @ a, decimal=4)
>>> assert (cinv.index == c.data.index).all()
>>> assert (cinv.columns == c.data.columns).all()

classmethod random_corr(size, correlations=True, seed=None, **kwargs)
>>> sandy.CategoryCov.random_corr(2, seed=1)
            0           1
0 1.00000e+00 4.40649e-01
1 4.40649e-01 1.00000e+00
>>> sandy.CategoryCov.random_corr(2, correlations=False, seed=1)
            0           1
0 1.00000e+00 0.00000e+00
1 0.00000e+00 1.00000e+00
classmethod random_cov(size, stdmin=0.0, stdmax=1.0, correlations=True, seed=None, **kwargs)

Construct a covariance matrix with random values

Parameters:
size : int

Dimension of the original matrix

stdmin : float, default is 0

minimum value of the uniform standard deviation vector

stdmax : float, default is 1

maximum value of the uniform standard deviation vector

correlations : bool, default is True

flag to insert the random correlations in the covariance matrix

seed : int, optional, default is None

seed for the random number generator (by default use numpy default pseudo-random number generator)

Returns:
CategoryCov

object containing covariance matrix

Examples

>>> sandy.CategoryCov.random_cov(2, seed=1)
            0           1
0 2.15373e-02 5.97134e-03
1 5.97134e-03 8.52642e-03
sampling(nsmp, seed=None, pdf='normal', tolerance=1e-10, relative=True, **kwargs)

Extract perturbation coefficients according to the chosen distribution, using the covariance from the given covariance matrix. See the notes for non-normal distribution sampling. The samples' mean will be 1 or 0 depending on the relative kwarg.

Parameters:
nsmp : int

number of samples.

seed : int, optional, default is None

seed for the random number generator (by default use numpy default pseudo-random number generator).

pdf : str, optional, default is 'normal'

random numbers distribution. Available distributions are:

  • 'normal'

  • 'uniform'

  • 'lognormal'

tolerance : float, optional, default is None

replace all eigenvalues smaller than a given tolerance with zeros.

relative : bool, optional, default is True

flag to switch between relative and absolute covariance matrix handling

  • True: samples' mean will be 1

  • False: samples' mean will be 0

Returns:
sandy.Samples

object containing samples

Notes

Note

sampling with the uniform distribution is performed on the diagonal covariance matrix, neglecting all correlations.

Note

sampling with a relative covariance matrix is performed by setting all negative perturbation coefficients equal to 0 and those larger than 2 equal to 2 for the normal distribution, or by adjusting the standard deviations so that negative samples are avoided for the uniform distribution.

Note

sampling with the lognormal distribution gives a set of samples with mean=1, since a lognormal distribution cannot have mean=0. In this case the relative parameter does not apply.
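
For illustration only (this is not the internal implementation), the truncation described for the normal distribution amounts to clipping the relative samples to the interval [0, 2]:

>>> smp = np.array([-0.3, 0.5, 1.0, 2.7])
>>> np.testing.assert_array_equal(np.clip(smp, 0, 2), [0, 0.5, 1, 2])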

Examples

Common parameters.

>>> seed = 11
>>> nsmp = 1e5

Create a positive-definite covariance matrix with small stdev.

>>> index = columns = ["A", "B"]
>>> c = pd.DataFrame([[1, 0.4],[0.4, 1]], index=index, columns=index) / 10
>>> cov = sandy.CategoryCov(c)

Draw relative samples using different distributions.

>>> smp_n = cov.sampling(nsmp, seed=seed, pdf='normal')
>>> smp_ln = cov.sampling(nsmp, seed=seed, pdf='lognormal')
>>> smp_u = cov.sampling(nsmp, seed=seed, pdf='uniform')

The sample mean converges to a unit vector.

>>> np.testing.assert_array_almost_equal(smp_n.get_mean(), [1, 1], decimal=2)
>>> np.testing.assert_array_almost_equal(smp_ln.get_mean(), [1, 1], decimal=2)
>>> np.testing.assert_array_almost_equal(smp_u.get_mean(), [1, 1], decimal=2)

The sample covariance converges to the original one.

>>> np.testing.assert_array_almost_equal(smp_n.get_cov(), c, decimal=3)
>>> np.testing.assert_array_almost_equal(smp_ln.get_cov(), c, decimal=3)
>>> np.testing.assert_array_almost_equal(np.diag(smp_u.get_cov()), np.diag(c), decimal=3)

Samples are reproducible by setting a seed.

>>> assert cov.sampling(nsmp, seed=seed, pdf='normal').data.equals(smp_n.data)

Create a covariance matrix with small stdev and a small negative eigenvalue.

>>> c = pd.DataFrame([[1, 1.2],[1.2, 1]], index=index, columns=index) / 10
>>> cov = sandy.CategoryCov(c)

>>> smp_0 = cov.sampling(nsmp, seed=seed, pdf='normal', tolerance=0)
>>> np.testing.assert_array_almost_equal(np.diag(smp_0.get_cov()), np.diag(c), decimal=2)
>>> smp_inf = cov.sampling(nsmp, seed=seed, pdf='normal', tolerance=np.inf)
>>> import pytest
>>> with pytest.raises(Exception):
...    raise np.testing.assert_array_almost_equal(np.diag(smp_0.get_cov()), np.diag(c), decimal=1)

Create a covariance matrix with small stdev and a large negative eigenvalue.

>>> c = pd.DataFrame([[1, 4],[4, 1]], index=index, columns=index) / 10
>>> cov = sandy.CategoryCov(c)

Samples only roughly converge, even when setting a low tolerance.

>>> smp_0 = cov.sampling(nsmp, seed=seed, pdf='normal', tolerance=0)
>>> with pytest.raises(Exception):
...    raise np.testing.assert_array_almost_equal(np.diag(smp_0.get_cov()), np.diag(c), decimal=1)
>>> smp_inf = cov.sampling(nsmp, seed=seed, pdf='normal', tolerance=np.inf)
>>> with pytest.raises(Exception):
...    raise np.testing.assert_array_almost_equal(np.diag(smp_0.get_cov()), np.diag(c), decimal=1)

Create a positive-definite covariance matrix with large stdev.

>>> index = columns = ["A", "B"]
>>> c = pd.DataFrame([[1, 0.4],[0.4, 1]], index=index, columns=index) / 10
>>> cov = sandy.CategoryCov(c)

Increase the number of samples.

>>> nsmp = 1e6

The sample mean still converges to a unit vector.

>>> np.testing.assert_array_almost_equal(smp_n.get_mean(), [1, 1], decimal=2)
>>> np.testing.assert_array_almost_equal(smp_ln.get_mean(), [1, 1], decimal=2)
>>> np.testing.assert_array_almost_equal(smp_u.get_mean(), [1, 1], decimal=2)

Only the lognormal covariance still converges.

>>> with pytest.raises(Exception):
...    raise np.testing.assert_array_almost_equal(smp_n.get_cov(), c, decimal=1)
>>> np.testing.assert_array_almost_equal(smp_ln.get_cov(), c, decimal=2)
>>> with pytest.raises(Exception):
...    raise np.testing.assert_array_almost_equal(np.diag(smp_u.get_cov()), np.diag(c), decimal=1)

sandwich(s)

Apply the "sandwich formula" to the CategoryCov object for a given sensitivity. According to http://dx.doi.org/10.1016/j.anucene.2015.10.027, the moment propagation equation is implemented as:

\[V_R = S\cdot V_P\cdot S^T\]

Parameters:
s : 1D or 2D iterable

General sensitivities (N,) or (M, N) with N the size of the CategoryCov object.

Returns:
sandy.CategoryCov

CategoryCov object corresponding to the response covariance matrix obtained with the sandwich formula.

Examples

>>> var = np.array([1, 2, 3])
>>> s = np.array([[1, 2, 3]])
>>> assert s.shape == (1, 3)
>>> cov = sandy.CategoryCov.from_var(var)
>>> cov.sandwich(s)
            0
0 3.60000e+01
>>> s = np.array([1, 2, 3])
>>> var = pd.Series([1, 2, 3])
>>> cov = sandy.CategoryCov.from_var(var)
>>> sensitivity = np.diag(s)
>>> cov.sandwich(sensitivity)
            0           1           2
0 1.00000e+00 0.00000e+00 0.00000e+00
1 0.00000e+00 8.00000e+00 0.00000e+00
2 0.00000e+00 0.00000e+00 2.70000e+01
>>> s = pd.DataFrame([[1, 0, 1], [0, 1, 1]], index=[2, 3], columns=[2, 3, 4]).T
>>> cov = pd.DataFrame([[1, 0], [0, 1]], index=[2, 3], columns=[2, 3])
>>> cov = sandy.CategoryCov(cov)
>>> cov.sandwich(s)
            2           3           4
2 1.00000e+00 0.00000e+00 1.00000e+00
3 0.00000e+00 1.00000e+00 1.00000e+00
4 1.00000e+00 1.00000e+00 2.00000e+00
property size
to_sparse(method='csr_matrix')

Method to extract CategoryCov values into a sparse matrix

Parameters:
method : str, optional

Name of the SciPy 2-D sparse matrix format. The default is 'csr_matrix'.

Returns:
data_sp : scipy.sparse matrix

CategoryCov instance values stored as a sparse matrix

Methods

`csr_matrix`:

Compressed Sparse Row matrix.

`bsr_matrix`:

Block Sparse Row matrix.

`coo_matrix`:

A sparse matrix in COOrdinate format.

`csc_matrix`:

Compressed Sparse Column matrix.

`dia_matrix`:

Sparse matrix with DIAgonal storage.

`dok_matrix`:

Dictionary Of Keys based sparse matrix.

`lil_matrix`:

Row-based list of lists sparse matrix.
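
A minimal usage sketch (assuming the returned object exposes the standard scipy.sparse interface, e.g. toarray):

>>> cov = sandy.CategoryCov(np.diag(np.array([1., 2., 3.])))
>>> sp = cov.to_sparse(method='csr_matrix')
>>> np.testing.assert_array_equal(sp.toarray(), cov.data.values)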

sandy.core.cov.corr2cov(corr, s)

Produce covariance matrix given correlation matrix and standard deviation array.

Parameters:
corr : 2D iterable

square 2D correlation matrix

s : 1D iterable

1D iterable with standard deviations

Returns:
numpy.ndarray

square 2D covariance matrix

Examples

Test with integers.

>>> s = np.array([1, 2, 3])
>>> corr = np.array([[1, 0, 2], [0, 3, 0], [2, 0, 1]])
>>> corr2cov(corr, s)
array([[ 1,  0,  6],
       [ 0, 12,  0],
       [ 6,  0,  9]])

Test with floats.

>>> corr2cov(corr, s.astype(float))
array([[ 1.,  0.,  6.],
       [ 0., 12.,  0.],
       [ 6.,  0.,  9.]])

sandy.core.cov.random_corr(size, correlations=True, seed=None)
sandy.core.cov.random_cov(size, stdmin=0.0, stdmax=1.0, correlations=True, seed=None)
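
The module-level random_corr and random_cov helpers carry no docstring; a minimal sketch (assuming they mirror the random_corr/random_cov classmethods documented above and return square matrices of the requested size):

>>> corr = sandy.core.cov.random_corr(3, seed=11)
>>> assert np.array(corr).shape == (3, 3)
>>> cov = sandy.core.cov.random_cov(3, stdmin=0.1, stdmax=0.5, seed=11)
>>> assert np.array(cov).shape == (3, 3)
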
sandy.core.cov.triu_matrix(matrix, kind='upper')

Given an upper or lower triangular matrix, return the full symmetric matrix.

Parameters:
matrix : 2D iterable

Upper triangular matrix

kind : str, optional

Select if matrix variable is upper or lower triangular matrix. The default is 'upper'

Returns:
pd.DataFrame

reconstructed symmetric matrix

Examples

>>> S = pd.DataFrame(np.array([[1, 2, 1], [0, 2, 4], [0, 0, 3]]))
>>> triu_matrix(S).data
            0           1           2
0 1.00000e+00 2.00000e+00 1.00000e+00
1 2.00000e+00 2.00000e+00 4.00000e+00
2 1.00000e+00 4.00000e+00 3.00000e+00

Overwrite the lower triangular part of the matrix:

>>> S = pd.DataFrame(np.array([[1, 2, 1], [-8, 2, 4], [-6, -5, 3]]))
>>> triu_matrix(S).data
            0           1           2
0 1.00000e+00 2.00000e+00 1.00000e+00
1 2.00000e+00 2.00000e+00 4.00000e+00
2 1.00000e+00 4.00000e+00 3.00000e+00

Test for lower triangular matrix:

>>> S = pd.DataFrame(np.array([[3, 0, 0], [5, 2, 0], [1, 2, 1]]))
>>> triu_matrix(S, kind='lower').data
            0           1           2
0 3.00000e+00 5.00000e+00 1.00000e+00
1 5.00000e+00 2.00000e+00 2.00000e+00
2 1.00000e+00 2.00000e+00 1.00000e+00

Overwrite the upper triangular part of the matrix:

>>> S = pd.DataFrame(np.array([[3, 5, -9], [5, 2, 8], [1, 2, 1]]))
>>> triu_matrix(S, kind='lower').data
            0           1           2
0 3.00000e+00 5.00000e+00 1.00000e+00
1 5.00000e+00 2.00000e+00 2.00000e+00
2 1.00000e+00 2.00000e+00 1.00000e+00