# ift issues
https://gitlab.mpcdf.mpg.de/groups/ift/-/issues

## Broken jacobian
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/304 · updated 2020-06-02T07:46:37Z · Philipp Arras (parras@mpa-garching.mpg.de)

I have discovered an example in which the Jacobian test breaks. I do not (yet?) see a reason why it should not work.
```python
import numpy as np
import nifty6 as ift
dom = ift.UnstructuredDomain(10)
op = (1.j*(ift.ScalingOperator(dom, 1.).log()).imag).exp()
loc = ift.from_random(op.domain, dtype=np.complex128) + 5
ift.extra.check_jacobian_consistency(op, loc)
```
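For context, the kind of check `check_jacobian_consistency` performs can be sketched in plain NumPy: compare an analytic Jacobian-vector product against a central finite difference in a random direction. This is a simplified, hypothetical stand-in, not the NIFTy implementation; `check_jacobian` and its tolerances are illustrative only.

```python
import numpy as np

def check_jacobian(f, jac, x, eps=1e-7, tol=1e-4):
    """Compare an analytic Jacobian-vector product against central finite
    differences in a random direction (simplified sketch of the idea behind
    ift.extra.check_jacobian_consistency, not the NIFTy code)."""
    rng = np.random.default_rng(42)
    v = rng.normal(size=np.shape(x))                     # perturbation direction
    fd = (f(x + eps * v) - f(x - eps * v)) / (2 * eps)   # finite difference
    an = jac(x, v)                                       # analytic directional derivative
    return bool(np.max(np.abs(fd - an)) < tol)

# f(x) = x**2 has the Jacobian-vector product 2*x*v:
x = np.linspace(1.0, 2.0, 10)
print(check_jacobian(lambda x: x**2, lambda x, v: 2 * x * v, x))  # True
```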
Note the `+ 5`, which ensures that the branch cut of the logarithm is avoided.

## Complex samples are inconsistent
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/306 · updated 2020-06-19T12:31:22Z · Reimar H Leike · assigned to Philipp Frank

When we draw samples with complex `dtype`, we multiply their magnitude with `sqrt(0.5)`.
This is done because then a sample drawn from a variance of 1 also has a standard deviation of 1 on average.
However, this is inconsistent with its Hamiltonian.
If we instead regard a complex number as a pair of real numbers, then, in order to reproduce the sample variance, we would need the vector `(real, imag)` to have the covariance
```
/ 0.5   0  \
\  0   0.5 /
```
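The mismatch can be made concrete numerically. The following plain-NumPy sketch (the conventions are illustrative, not the NIFTy code) compares the real-pair quadratic form under this covariance with the complex quadratic form under covariance 1:

```python
import numpy as np

# Draw a complex sample scaled by sqrt(0.5), so Re and Im each have
# variance 0.5, then compare the two quadratic forms.
rng = np.random.default_rng(0)
z = np.sqrt(0.5) * (rng.normal(size=10_000) + 1j * rng.normal(size=10_000))

# x^T C^{-1} x for x = (Re z, Im z) with C = diag(0.5, 0.5):
q_pair = z.real**2 / 0.5 + z.imag**2 / 0.5
# conj(z) C^{-1} z with complex covariance C = 1:
q_complex = np.abs(z)**2

print(np.allclose(q_pair, 2 * q_complex))  # True: a factor-of-two mismatch
```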
However, this covariance would lead to the Hamiltonian `(real**2 + imag**2)*2`, which is double what we would get when using the complex likelihood with covariance 1.

## Invalid call to inverse
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/302 · updated 2020-06-24T15:04:05Z · Gordian Edenhofer

Why is nifty calling `.inverse` of an operator that definitely does not support it?
```ipython
In [40]: n2f_jac
Out[40]:
ChainOperator:
DiagonalOperator
OuterProduct
In [41]: n2f_jac.domain
Out[41]:
DomainTuple:
HPSpace(nside=16)
In [42]: n2f_jac.target
Out[42]:
DomainTuple:
UnstructuredDomain(shape=(30,))
HPSpace(nside=16)
In [43]: n2f_jac.capability
Out[43]: 3
In [44]: ift.extra.consistency_check(n2f_jac)
~/Projects/nifty/nifty6/extra.py in consistency_check(op, domain_dtype, target_dtype, atol, rtol, only_r_linear)
201 _actual_domain_check_linear(op, domain_dtype)
202 _actual_domain_check_linear(op.adjoint, target_dtype)
--> 203 _actual_domain_check_linear(op.inverse, target_dtype)
~/Projects/nifty/nifty6/operators/linear_operator.py in inverse(self)
86 Returns a LinearOperator object which behaves as if it were
87 the inverse of this operator."""
---> 88 return self._flip_modes(self.INVERSE_BIT)
~/Projects/nifty/nifty6/operators/chain_operator.py in _flip_modes(self, trafo)
123 if trafo == ADJ or trafo == INV:
124 return self.make(
--> 125 [op._flip_modes(trafo) for op in reversed(self._ops)])
~/Projects/nifty/nifty6/operators/chain_operator.py in <listcomp>(.0)
123 if trafo == ADJ or trafo == INV:
124 return self.make(
--> 125 [op._flip_modes(trafo) for op in reversed(self._ops)])
~/Projects/nifty/nifty6/operators/diagonal_operator.py in _flip_modes(self, trafo)
149 if trafo & self.INVERSE_BIT:
--> 150 xdiag = 1./xdiag # This operator contains zeros on one axis
FloatingPointError: divide by zero encountered in true_divide
In [45]: n2f_jac(r).val
Out[45]:
array([[ 2.87943447e-02, 2.84809183e-02, 2.86276722e-02, ...,
2.85864172e-02, 2.88285874e-02, 2.88791209e-02],
...,
[ 0.00000000e+00, 1.46037080e-16, 0.00000000e+00, ...,
1.43154658e-16, -1.34704751e-16, 0.00000000e+00], # zero except for numerical fluctuations
[ 3.18092631e-02, 3.11583354e-02, 3.14395536e-02, ...,
3.13576202e-02, 3.18985295e-02, 3.20487145e-02]])
```

## Inconsistency in interface
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/309 · updated 2020-06-29T12:27:54Z · Philipp Arras

`ift.StructuredDomain.total_volume` is a property, `ift.DomainTuple.total_volume` is a method. Shall we unify that?

## Better support for partial inference
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/254 · updated 2020-06-30T14:00:51Z · Martin Reinecke

I have added the `partial-const` branch for tweaks that improve NIFTy's support for partial inference.
The current idea is that one first builds the full Hamiltonian (as before), and then obtains a simplified version for partially constant inputs by calling its method `simplify_for_constant_input()` with the constant input components.
All that needs to be done (I hope) is to specialize this method for all composed operators, i.e. those that call other operators internally.
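In plain-Python terms, the idea corresponds to partially applying a function so that some of its inputs are frozen; a sketch using `functools.partial` (an illustrative analogue, not the proposed NIFTy interface — `hamiltonian` and its arguments are hypothetical):

```python
from functools import partial

# A toy two-argument "Hamiltonian":
def hamiltonian(xi_a, xi_b):
    return (xi_a - 1.0) ** 2 + (xi_b + 2.0) ** 2

# Freeze xi_a; the simplified object depends only on the free input xi_b:
simplified = partial(hamiltonian, xi_a=0.5)
print(simplified(xi_b=0.0))  # (0.5 - 1)^2 + (0 + 2)^2 = 4.25
```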
@kjako, @reimar, @parras, @pfrank: does the principal idea look sound to you?

## Partial Constant EnergyAdapter
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/311 · updated 2020-07-18T17:21:28Z · Philipp Frank · assigned to Philipp Arras

The following minimal example raises an unexpected error on initialization of the `EnergyAdapter` object:

```python
import nifty7 as ift
d = ift.UnstructuredDomain(10)
a = ift.FieldAdapter(d, 'hi') + ift.FieldAdapter(d, 'ho')
lh = ift.GaussianEnergy(domain=a.target) @ a
x = ift.from_random(lh.domain)
H = ift.EnergyAdapter(x, lh, constants=['hi'])
```

## Always mirror samples
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/314 · updated 2020-11-28T14:12:32Z · Philipp Arras

@reimar @kjako @ensslint I just realized that `mirror_samples=True` is not the default for `MetricGaussianKL`. What is your opinion on this? I have the feeling that virtually all applications benefit from mirrored samples.

## Unexpected get_sqrt behaviour
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/315 · updated 2020-12-02T18:48:49Z · Sebastian Hutschenreuter

The following code raises a ValueError claiming no positive definiteness of the operator
```python
import nifty7 as ift
ds = ift.makeDomain(ift.RGSpace(100))
dd = ift.DiagonalOperator(ds, 4).get_sqrt()
```
while
```python
sd = ift.ScalingOperator(ift.full(ds, 4)).get_sqrt()
```
is fine.
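For a diagonal operator, `get_sqrt` presumably reduces to an elementwise square root behind a positivity check; a minimal sketch of the expected behaviour (`diag_get_sqrt` is a hypothetical stand-in, not the NIFTy code):

```python
import numpy as np

def diag_get_sqrt(diag):
    # Hypothetical sketch of DiagonalOperator.get_sqrt semantics:
    # elementwise square root, valid only for nonnegative real diagonals.
    diag = np.asarray(diag, dtype=float)
    # An accidentally inverted condition here would reject exactly the
    # valid, positive diagonals -- matching the reported behaviour.
    if not np.all(diag >= 0):
        raise ValueError("operator not positive definite")
    return np.sqrt(diag)

print(diag_get_sqrt(np.full(100, 4.0))[:3])  # [2. 2. 2.]
```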
Presumably that's a bug, and the `not` in the following code line in
[1](https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_7/src/operators/diagonal_operator.py#L170) is superfluous?

## Possibly unexpected behavior of correlated_field
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/316 · updated 2021-01-13T11:13:02Z · Jakob Roth

@gedenhof and I stumbled over the following behavior of the correlated field. We expected that the correlation structure in the correct units stays unchanged when changing the target space's distances. In other words, if we increase the pixel distance, we just zoom out, and when we decrease the pixel distance, we zoom in. However, this doesn't seem to be the case, as you can see here: in the right plot, the pixel distance is a factor of 10 larger.
![correlated_field](/uploads/bd5c783c172b52b23a347e2e70b7d87d/correlated_field.png)
When we create fields with a fixed power spectrum, everything works as expected. Again, in the right plot, the pixel distance is a factor of 10 larger. Here the right plot is a zoomed out version of the left.
![fixed_power_spec](/uploads/b3299fd32f0100ff9ea306f6939ab805/fixed_power_spec.png)
To us, this seems unintended, but we could not figure out how to change the correlated_field operator. @parras, @pfrank Could one of you help us to fix this, or explain why this is intended to be like this?
[pix_dist.py](/uploads/496ad8a62ee83f1c81a8fa39ba09a843/pix_dist.py)

## Unexpected amplification of fluctuations in the correlated field model by adding axes
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/312 · updated 2021-01-13T18:02:56Z · Gordian Edenhofer

I am working on a problem in which I might dynamically add axes and as such was relying on the normalization of the zero-mode in the correlated field model. I was surprised to notice that with each added axis the fluctuations of the correlated field get amplified by the inverse of `offset_std_mean`. I am using NIFTy6 @ afc3e2df5e6b6d5f90eb414f07beeeefec2a085d.
The following code snippet reproduces the described amplification:
```python
cf_axes = {
"offset_mean": 0.,
"offset_std_mean": 1e-3,
"offset_std_std": 1e-4,
"prefix": ''
}
temporal_axis_fluctuations = {
'fluctuations_mean': 1.,
'fluctuations_stddev': 0.1,
'loglogavgslope_mean': -1.0,
'loglogavgslope_stddev': 0.5,
'flexibility_mean': 2.5,
'flexibility_stddev': 1.0,
'asperity_mean': 0.5,
'asperity_stddev': 0.5,
'prefix': 'temporal_axis'
}
fish_axis_fluctuations = {
'fluctuations_mean': 1.,
'fluctuations_stddev': 0.1,
'loglogavgslope_mean': -1.5,
'loglogavgslope_stddev': 0.5,
'flexibility_mean': 2.5,
'flexibility_stddev': 1.0,
'asperity_mean': 0.5,
'asperity_stddev': 0.3,
'prefix': 'fish_axis'
}
cfmaker = ift.CorrelatedFieldMaker.make(**cf_axes)
cfmaker.add_fluctuations(ift.RGSpace(7638), **temporal_axis_fluctuations)
# cfmaker.add_fluctuations(ift.RGSpace(8), **fish_axis_fluctuations)
```
yields
```python
cf = cfmaker.finalize()
np.mean([cf(ift.from_random(cf.domain)).s_std() for _ in range(100)]) # \approx 1
```
which is what I would expect. However, if I uncomment the last line in the second cell above, the result becomes
```python
cf = cfmaker.finalize()
np.mean([cf(ift.from_random(cf.domain)).s_std() for _ in range(100)]) # \approx 1e+3
```
This does not make sense to me and I was expecting the same result as in the previous cell.
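The reported factor of roughly `1e+3` is consistent with a stray `1/offset_std_mean` entering once per added axis. A toy NumPy sketch of that scaling (NOT the correlated field model; `s` is a hypothetical stand-in for `offset_std_mean`):

```python
import numpy as np

rng = np.random.default_rng(1)
s = 1e-3                         # plays the role of offset_std_mean
a = rng.normal(size=1000)        # first axis factor, std ~ 1
b = rng.normal(size=1000)        # second axis factor, std ~ 1

one_axis = a                     # std ~ 1, as observed with one axis
two_axes = np.outer(a, b) / s    # a hypothetical stray 1/s: std ~ 1e+3

print(np.std(one_axis), np.std(two_axes))  # ~1 vs ~1/s
```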
In short: am I missing something, or is there a bug in the correlated field model?

## Unexpected behaviour of ducktape and ducktape_left
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/310 · updated 2021-01-22T14:13:41Z · Philipp Frank

I've noticed an unexpected behaviour of `ducktape_left` and `ducktape` in combination with `Linearization`.
The following example code:
```python
a = ift.Linearization.make_var(ift.from_random(ift.RGSpace(3))).ducktape_left('a')
```
does not produce an error but produces something that is not an instance of `Linearization` any more. Instead it returns an `_OpChain`. The same is true if we replace `ducktape_left` with `ducktape`.
I suspect this is due to the fact that `Linearization` is now an instance of `Operator` but does not implement `ducktape` itself.
In contrast, if we try this with `Field`:
```python
a = ift.from_random(ift.RGSpace(3)).ducktape_left('a')
```
we get an error.
I see two possible solutions for this: either we disable the (currently unintended?) support of `Linearization` in `ducktape` and `ducktape_left` or we implement a proper version of them for `Field` and `Linearization`.
@all does somebody know what is going on here?

## Ducc dependency
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/318 · updated 2021-03-07T13:33:04Z · Jakob Roth

So far DUCC was an optional dependency of nifty7. Now it is strictly required because of the import in https://gitlab.mpcdf.mpg.de/ift/nifty/-/blob/NIFTy_7/src/library/nft.py#L20
Is this intended?

## AttributeError: 'OperatorAdapter' object has no attribute 'duckape'
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/319 · updated 2021-03-07T22:57:53Z · Gordian Edenhofer

OperatorAdapter, e.g. the adjoint of an operator, should support ducktaping.

## MaternKernel implementation
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/317 · updated 2021-03-24T09:12:11Z · Vincent Eberle

# Matern Kernel
I thought it is a good idea to create this issue to keep everyone up-to-date who is involved in development or already uses the Matern_kernel.
https://gitlab.mpcdf.mpg.de/ift/nifty/-/tree/matern_kernel
@matteani @jroth @parras @sding @gedenhof @vkainz
If I forgot somebody, just mention them to make sure that they are notified.
# TODO
- [x] Implementation of a**b by Simon and Philipp
- [x] Tests
- [x] Cosmetics
- [x] Demo
- [x] Make statistics_summary work for Matern Kernel fluctuations

## Is `ift.operators.operator._FunctionApplier` exposed to the NIFTy namespace? If not, why?
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/320 · updated 2021-04-07T10:02:09Z · Lukas Platz

I just had the case where I wanted to prepend a pointwise operator to a given operator. For *ap*pending a pointwise operator, we have the syntax `op.ptw()`, but do we also have a direct way to *pre*pend an operator?
What I came up with on a hunch was `op @ ift.ScalingOperator(op.domain, 1.).abs()`, but that is just horrible.
Is there a simple way to do this that I forgot? If not, why don't we expose `operator._FunctionApplier` as `ift.FunctionApplier`?
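In plain-function terms the distinction is just composition order (the `compose` helper below is hypothetical, not NIFTy API): `op.ptw(f)` corresponds to applying `f` after `op`, while the requested "prepend" applies `f` before `op`.

```python
# Hypothetical composition helper to illustrate append vs. prepend:
def compose(outer, inner):
    return lambda x: outer(inner(x))

op = lambda x: x - 10

append_abs = compose(abs, op)    # abs applied after op:  |x - 10|
prepend_abs = compose(op, abs)   # abs applied before op: |x| - 10

print(append_abs(-3), prepend_abs(-3))  # 13 -7
```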
Cheers!

## Fisher test for VariableCovarianceGaussianEnergy not sensitive
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/308 · updated 2021-04-09T12:56:45Z · Philipp Arras · assigned to Reimar H Leike

On the branch `metric_tests` I have introduced a breaking factor (949578182c660faee1ea7344d79993d9d1b35310) and the test does not break. I am not sure how to fix it. Is it even possible to fix it in this case? Do we need two test cases, one where the mean is sampled and one where the variance is sampled?

## Windows compatibility
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/321 · updated 2021-05-05T17:08:08Z · Lukas Platz

A collaborator of mine just tried to install NIFTy on a Windows machine in an Anaconda environment and had the problem that the symlink from `nifty7` to `src` was apparently breaking the setup. Once she removed it and renamed `src` to `nifty7`, the setup worked.
I have not checked if this is the general behavior under Windows or if it is just her setup, but assume it is the former.
Has anybody else experience with this and can weigh in?
What was the rationale behind changing the source location and introducing the symlink?
Cheers,
Lukas

## Update `demos/mgvi_visualized.py` to GeoVI
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/324 · updated 2021-05-30T14:49:38Z · Philipp Arras · assigned to Philipp Arras

NIFTy7 release

## Update contributors list
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/322 · updated 2021-05-30T17:28:48Z · Philipp Arras · assigned to Philipp Arras

NIFTy7 release

## Check NIFTy_7 documentation generation for error messages
https://gitlab.mpcdf.mpg.de/ift/nifty/-/issues/326 · updated 2021-05-30T17:30:29Z · Philipp Arras · assigned to Philipp Arras

NIFTy7 release