Scikit Learn: Randomized Logistic Regression gives ValueError: output array is read-only

I am trying to fit a Randomized Logistic Regression to my data, but the fit fails. Here is the code:

import numpy as np
from sklearn.linear_model import RandomizedLogisticRegression

X = np.load("X.npy")
y = np.load("y.npy")

randomized_LR = RandomizedLogisticRegression(C=0.1, verbose=True, n_jobs=3)
randomized_LR.fit(X, y)

This gives an error:

    344     if issparse(X):
    345         size = len(weights)
    346         weight_dia = sparse.dia_matrix((1 - weights, 0), (size, size))
    347         X = X * weight_dia
    348     else:
--> 349         X *= (1 - weights)
    350
    351     C = np.atleast_1d(np.asarray(C, dtype=np.float))
    352     scores = np.zeros((X.shape[1], len(C)), dtype=np.bool)
    353

ValueError: output array is read-only

Could someone point out what I should do to proceed please?

Thank you very much,

Hendra

Complete Traceback as requested:

Traceback (most recent call last):
  File "temp.py", line 88, in <module>
    train_randomized_logistic_regression()
  File "temp.py", line 82, in train_randomized_logistic_regression
    randomized_LR.fit(X, y)
  File "/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/linear_model/randomized_l1.py", line 110, in fit
    sample_fraction=self.sample_fraction, **params)
  File "/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/memory.py", line 281, in __call__
    return self.func(*args, **kwargs)
  File "/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/linear_model/randomized_l1.py", line 52, in _resample_model
    for _ in range(n_resampling)):
  File "/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 660, in __call__
    self.retrieve()
  File "/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 543, in retrieve
    raise exception_type(report)
sklearn.externals.joblib.my_exceptions.JoblibValueError: JoblibValueError
___________________________________________________________________________
Multiprocessing exception:
...........................................................................
/zfs/ilps-plexest/homedirs/hbunyam1/social_graph/temp.py in <module>()
     83
     84
     85
     86 if __name__ == '__main__':
     87
---> 88     train_randomized_logistic_regression()
     89
     90
     91
     92

...........................................................................
/zfs/ilps-plexest/homedirs/hbunyam1/social_graph/temp.py in train_randomized_logistic_regression()
     77     X = np.load( 'data/issuemakers/features/new_X.npy')
     78     y = np.load( 'data/issuemakers/features/new_y.npy')
     79
     80     randomized_LR = RandomizedLogisticRegression(C=0.1, n_jobs=32)
     81
---> 82     randomized_LR.fit(X, y)
    randomized_LR.fit = <bound method RandomizedLogisticRegression.fit o...d=0.25,
           tol=0.001, verbose=False)>
    X = array([[  1.01014900e+06,   7.29970000e+04,   2....460000e+04,   3.11428571e+01,   1.88100000e+03]])
    y = array([1, 1, 1, ..., 0, 1, 1])
     83
     84
     85
     86 if __name__ == '__main__':

...........................................................................
/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/linear_model/randomized_l1.py in  fit(self=RandomizedLogisticRegression(C=0.1, fit_intercep...ld=0.25,
           tol=0.001, verbose=False), X=array([[  6.93135506e-04,   8.93676615e-04,    -1....234095e-04,  -1.19037488e-04,   4.20921021e-04]]), y=array([1, 1, 1, ..., 0, 1, 1]))
    105         )(
    106             estimator_func, X, y,
    107             scaling=self.scaling, n_resampling=self.n_resampling,
    108             n_jobs=self.n_jobs, verbose=self.verbose,
    109             pre_dispatch=self.pre_dispatch, random_state=self.random_state,
--> 110             sample_fraction=self.sample_fraction, **params)
    self.sample_fraction = 0.75
    params = {'C': 0.1, 'fit_intercept': True, 'tol': 0.001}
    111
    112         if scores_.ndim == 1:
    113             scores_ = scores_[:, np.newaxis]
    114         self.all_scores_ = scores_

 ...........................................................................
 /home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/memory.py in __call__(self=NotMemorizedFunc(func=<function _resample_model at 0x7fb5d7d12b18>), *args=(<function _randomized_logistic>, array([[  6.93135506e-04,   8.93676615e-04,  -1....234095e-04,  -1.19037488e-04,   4.20921021e-04]]), array([1, 1, 1, ..., 0, 1, 1])), **kwargs={'C': 0.1, 'fit_intercept': True, 'n_jobs': 32, 'n_resampling': 200, 'pre_dispatch': '3*n_jobs', 'random_state': None, 'sample_fraction': 0.75, 'scaling': 0.5, 'tol': 0.001, 'verbose': False})
    276     # Should be a light as possible (for speed)
    277     def __init__(self, func):
    278         self.func = func
    279
    280     def __call__(self, *args, **kwargs):
--> 281         return self.func(*args, **kwargs)
    self.func = <function _resample_model>
    args = (<function _randomized_logistic>, array([[  6.93135506e-04,   8.93676615e-04,  -1....234095e-04,  -1.19037488e-04,   4.20921021e-04]]), array([1, 1, 1, ..., 0, 1, 1]))
    kwargs = {'C': 0.1, 'fit_intercept': True, 'n_jobs': 32, 'n_resampling': 200, 'pre_dispatch': '3*n_jobs', 'random_state': None, 'sample_fraction': 0.75, 'scaling': 0.5, 'tol': 0.001, 'verbose': False}
282
283     def call_and_shelve(self, *args, **kwargs):
284         return NotMemorizedResult(self.func(*args, **kwargs))
285

...........................................................................
/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/linear_model/randomized_l1.py in _resample_model(estimator_func=<function _randomized_logistic>, X=array([[  6.93135506e-04,   8.93676615e-04,  -1....234095e-04,  -1.19037488e-04,   4.20921021e-04]]), y=array([1, 1, 1, ..., 0, 1, 1]), scaling=0.5, n_resampling=200, n_jobs=32, verbose=False, pre_dispatch='3*n_jobs', random_state=<mtrand.RandomState object>, sample_fraction=0.75, **params={'C': 0.1, 'fit_intercept': True, 'tol': 0.001})
     47                 X, y, weights=scaling * random_state.random_integers(
     48                     0, 1, size=(n_features,)),
     49                 mask=(random_state.rand(n_samples) < sample_fraction),
     50                 verbose=max(0, verbose - 1),
     51                 **params)
---> 52             for _ in range(n_resampling)):
    n_resampling = 200
     53         scores_ += active_set
     54
     55     scores_ /= n_resampling
     56     return scores_

 ...........................................................................
 /home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py in __call__(self=Parallel(n_jobs=32), iterable=<itertools.islice object>)
    655             if pre_dispatch == "all" or n_jobs == 1:
    656                 # The iterable was consumed all at once by the above for loop.
    657                 # No need to wait for async callbacks to trigger to
    658                 # consumption.
    659                 self._iterating = False
--> 660             self.retrieve()
    self.retrieve = <bound method Parallel.retrieve of Parallel(n_jobs=32)>
    661             # Make sure that we get a last message telling us we are done
    662             elapsed_time = time.time() - self._start_time
    663             self._print('Done %3i out of %3i | elapsed: %s finished',
    664                         (len(self._output),

---------------------------------------------------------------------------
Sub-process traceback:
---------------------------------------------------------------------------
ValueError                                         Fri Jan  2 12:13:54 2015
PID: 126664                Python 2.7.8: /home/hbunyam1/anaconda/bin/python
...........................................................................
/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/linear_model/randomized_l1.pyc in _randomized_logistic(X=memmap([[  6.93135506e-04,   8.93676615e-04,  -1...234095e-04,  -1.19037488e-04,   4.20921021e-04]]), y=array([1, 1, 1, ..., 0, 1, 1]), weights=array([ 0.5,  0. ,  0. ,  0.5,  0. ,  0.5,  0. ,...  0. ,  0. ,  0.5,  0. ,  0. ,  0. ,  0. ,  0.5]), mask=array([ True,  True,  True, ...,  True,  True,  True], dtype=bool), C=0.1, verbose=0, fit_intercept=True, tol=0.001)
    344     if issparse(X):
    345         size = len(weights)
    346         weight_dia = sparse.dia_matrix((1 - weights, 0), (size, size))
    347         X = X * weight_dia
    348     else:
--> 349         X *= (1 - weights)
    350
    351     C = np.atleast_1d(np.asarray(C, dtype=np.float))
    352     scores = np.zeros((X.shape[1], len(C)), dtype=np.bool)
    353

ValueError: output array is read-only
___________________________________________________________________________
After retrying with mmap_mode='r+' as suggested in the comments, the same error occurs:

[hbunyam1@zookst20 social_graph]$ python temp.py
Traceback (most recent call last):
  File "temp.py", line 88, in <module>
    train_randomized_logistic_regression()
  File "temp.py", line 82, in train_randomized_logistic_regression
    randomized_LR.fit(X, y)
  File "/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/linear_model/randomized_l1.py", line 110, in fit
    sample_fraction=self.sample_fraction, **params)
  File "/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/memory.py", line 281, in __call__
    return self.func(*args, **kwargs)
  File "/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/linear_model/randomized_l1.py", line 52, in _resample_model
    for _ in range(n_resampling)):
  File "/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 660, in __call__
    self.retrieve()
  File "/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 543, in retrieve
    raise exception_type(report)
sklearn.externals.joblib.my_exceptions.JoblibValueError: JoblibValueError
___________________________________________________________________________
Multiprocessing exception:
    ...........................................................................
/zfs/ilps-plexest/homedirs/hbunyam1/social_graph/temp.py in <module>()
     83
     84
     85
     86 if __name__ == '__main__':
     87
---> 88     train_randomized_logistic_regression()
     89
     90
     91
     92

...........................................................................
/zfs/ilps-plexest/homedirs/hbunyam1/social_graph/temp.py in train_randomized_logistic_regression()
     77     X = np.load( 'data/issuemakers/features/new_X.npy', mmap_mode='r+')
     78     y = np.load( 'data/issuemakers/features/new_y.npy', mmap_mode='r+')
     79
     80     randomized_LR = RandomizedLogisticRegression(C=0.1, n_jobs=32)
     81
---> 82     randomized_LR.fit(X, y)
        randomized_LR.fit = <bound method RandomizedLogisticRegression.fit o...d=0.25,
               tol=0.001, verbose=False)>
        X = memmap([[  1.01014900e+06,   7.29970000e+04,   2...460000e+04,   3.11428571e+01,   1.88100000e+03]])
        y = memmap([1, 1, 1, ..., 0, 1, 1])
     83
     84
     85
     86 if __name__ == '__main__':

...........................................................................
/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/linear_model/randomized_l1.py in fit(self=RandomizedLogisticRegression(C=0.1, fit_intercep...ld=0.25,
               tol=0.001, verbose=False), X=array([[  6.93135506e-04,   8.93676615e-04,  -1....234095e-04,  -1.19037488e-04,   4.20921021e-04]]), y=array([1, 1, 1, ..., 0, 1, 1]))
    105         )(
    106             estimator_func, X, y,
    107             scaling=self.scaling, n_resampling=self.n_resampling,
    108             n_jobs=self.n_jobs, verbose=self.verbose,
    109             pre_dispatch=self.pre_dispatch, random_state=self.random_state,
--> 110             sample_fraction=self.sample_fraction, **params)
        self.sample_fraction = 0.75
        params = {'C': 0.1, 'fit_intercept': True, 'tol': 0.001}
    111
    112         if scores_.ndim == 1:
    113             scores_ = scores_[:, np.newaxis]
    114         self.all_scores_ = scores_

...........................................................................
/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/memory.py in __call__(self=NotMemorizedFunc(func=<function _resample_model at 0x7f192c829b18>), *args=(<function _randomized_logistic>, array([[  6.93135506e-04,   8.93676615e-04,  -1....234095e-04,  -1.19037488e-04,   4.20921021e-04]]), array([1, 1, 1, ..., 0, 1, 1])), **kwargs={'C': 0.1, 'fit_intercept': True, 'n_jobs': 32, 'n_resampling': 200, 'pre_dispatch': '3*n_jobs', 'random_state': None, 'sample_fraction': 0.75, 'scaling': 0.5, 'tol': 0.001, 'verbose': False})
    276     # Should be a light as possible (for speed)
    277     def __init__(self, func):
    278         self.func = func
    279
    280     def __call__(self, *args, **kwargs):
--> 281         return self.func(*args, **kwargs)
        self.func = <function _resample_model>
        args = (<function _randomized_logistic>, array([[  6.93135506e-04,   8.93676615e-04,  -1....234095e-04,  -1.19037488e-04,   4.20921021e-04]]), array([1, 1, 1, ..., 0, 1, 1]))
        kwargs = {'C': 0.1, 'fit_intercept': True, 'n_jobs': 32, 'n_resampling': 200, 'pre_dispatch': '3*n_jobs', 'random_state': None, 'sample_fraction': 0.75, 'scaling': 0.5, 'tol': 0.001, 'verbose': False}
    282
    283     def call_and_shelve(self, *args, **kwargs):
    284         return NotMemorizedResult(self.func(*args, **kwargs))
    285

...........................................................................
/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/linear_model/randomized_l1.py in _resample_model(estimator_func=<function _randomized_logistic>, X=array([[  6.93135506e-04,   8.93676615e-04,  -1....234095e-04,  -1.19037488e-04,   4.20921021e-04]]), y=array([1, 1, 1, ..., 0, 1, 1]), scaling=0.5, n_resampling=200, n_jobs=32, verbose=False, pre_dispatch='3*n_jobs', random_state=<mtrand.RandomState object>, sample_fraction=0.75, **params={'C': 0.1, 'fit_intercept': True, 'tol': 0.001})
     47                 X, y, weights=scaling * random_state.random_integers(
     48                     0, 1, size=(n_features,)),
     49                 mask=(random_state.rand(n_samples) < sample_fraction),
     50                 verbose=max(0, verbose - 1),
     51                 **params)
---> 52             for _ in range(n_resampling)):
        n_resampling = 200
     53         scores_ += active_set
     54
     55     scores_ /= n_resampling
     56     return scores_

...........................................................................
/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py in __call__(self=Parallel(n_jobs=32), iterable=<itertools.islice object>)
    655             if pre_dispatch == "all" or n_jobs == 1:
    656                 # The iterable was consumed all at once by the above for loop.
    657                 # No need to wait for async callbacks to trigger to
    658                 # consumption.
    659                 self._iterating = False
--> 660             self.retrieve()
        self.retrieve = <bound method Parallel.retrieve of Parallel(n_jobs=32)>
    661             # Make sure that we get a last message telling us we are done
    662             elapsed_time = time.time() - self._start_time
    663             self._print('Done %3i out of %3i | elapsed: %s finished',
    664                         (len(self._output),

    ---------------------------------------------------------------------------
    Sub-process traceback:
    ---------------------------------------------------------------------------
    ValueError                                         Fri Jan  2 12:57:25 2015
PID: 127177                Python 2.7.8: /home/hbunyam1/anaconda/bin/python
...........................................................................
/home/hbunyam1/anaconda/lib/python2.7/site-packages/sklearn/linear_model/randomized_l1.pyc in _randomized_logistic(X=memmap([[  6.93135506e-04,   8.93676615e-04,  -1...234095e-04,  -1.19037488e-04,   4.20921021e-04]]), y=memmap([1, 1, 1, ..., 0, 0, 1]), weights=array([ 0.5,  0.5,  0. ,  0.5,  0.5,  0.5,  0.5,...  0. ,  0.5,  0. ,  0. ,  0.5,  0.5,  0.5,  0.5]), mask=array([ True,  True,  True, ..., False, False,  True], dtype=bool), C=0.1, verbose=0, fit_intercept=True, tol=0.001)
    344     if issparse(X):
    345         size = len(weights)
    346         weight_dia = sparse.dia_matrix((1 - weights, 0), (size, size))
    347         X = X * weight_dia
    348     else:
--> 349         X *= (1 - weights)
    350
    351     C = np.atleast_1d(np.asarray(C, dtype=np.float))
    352     scores = np.zeros((X.shape[1], len(C)), dtype=np.bool)
    353

ValueError: output array is read-only
___________________________________________________________________________

I received the same error when running this function on a 32-processor Ubuntu server. The problem persisted for every n_jobs value above 1, but went away when n_jobs was set to the default of 1 (as benbo described).

This is a bug in RandomizedLogisticRegression: with n_jobs > 1, joblib passes large input arrays to the worker processes as read-only memory maps, and the fitting code then tries to modify X in place (X *= (1 - weights) in randomized_l1.py), which a read-only array does not allow.

Please refer to the scikit-learn GitHub issue, which discusses this problem and possible fixes in depth: https://github.com/scikit-learn/scikit-learn/issues/4597
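The failure mode can be reproduced without scikit-learn at all. A minimal sketch (using a throwaway temporary file, not the asker's data): joblib memory-maps large input arrays read-only for its workers, and the fitting code then modifies X in place, which a read-only array forbids.

```python
import os
import tempfile
import numpy as np

# Save a small array to a temporary .npy file.
path = os.path.join(tempfile.mkdtemp(), "X.npy")
np.save(path, np.ones((4, 3)))

# Load it back as a read-only memory map, like the joblib workers receive.
X = np.load(path, mmap_mode="r")

try:
    X *= 0.5  # the same in-place operation as randomized_l1.py line 349
    raised = False
except ValueError:
    raised = True

print(raised)  # True: in-place modification of a read-only array fails
```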


The reason is the max_nbytes parameter of the Parallel invocation that scikit-learn makes internally through joblib when you set n_jobs > 1; it defaults to 1M. This parameter is defined as:

Threshold on the size of arrays passed to the workers that triggers automated memory mapping in temp_folder.

More details can be found here: https://joblib.readthedocs.io/en/latest/generated/joblib.Parallel.html#
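scikit-learn does not expose this parameter on the estimator itself, but its effect can be sketched directly with joblib (a toy reduction function, not the actual fitting code): with max_nbytes=None the automatic memory mapping is disabled, so workers receive an ordinary writable copy of the array instead of a read-only memmap.

```python
import numpy as np
from joblib import Parallel, delayed

# 300 * 500 * 8 bytes = 1.2 MB of float64, above joblib's 1M default.
X = np.ones((300, 500))

def total(a):
    # Workers can freely convert/modify their copy when memmapping is off.
    return float(np.asarray(a).sum())

# max_nbytes=None disables the size-triggered memory mapping entirely.
out = Parallel(n_jobs=2, max_nbytes=None)(delayed(total)(X) for _ in range(2))
print(out)  # [150000.0, 150000.0]
```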

So, once an array passes the 1M threshold, joblib memory-maps it read-only for the workers, and the in-place modification then fails with ValueError: output array is read-only. This error is easy to replicate. Let's look at the following code:

import numpy as np
from sklearn.linear_model import RandomizedLogisticRegression
# Create some random data
samples = 2621
X = np.random.randint(1,100, size=(samples, 50))
y = np.random.randint(100,200, size=(samples))

randomized_LR = RandomizedLogisticRegression(C=0.1, verbose=True, n_jobs=3)
randomized_LR.fit(X, y)

This will run without any problems. If we look at the size of X using print(X.nbytes / 1024**2), it shows that the array is about 0.99983 MB and thus just under the threshold.

If we run the same code again, but change the number of samples to 2622:

import numpy as np
from sklearn.linear_model import RandomizedLogisticRegression

samples = 2622
X = np.random.randint(1,100, size=(samples, 50))
print(X.nbytes/1024**2)
y = np.random.randint(100,200, size=(samples))

randomized_LR = RandomizedLogisticRegression(C=0.1, verbose=True, n_jobs=3)
randomized_LR.fit(X, y)

Python fails with ValueError: output array is read-only; checking the size of X shows that it is about 1.00021 MB and thus just over the threshold.
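The boundary arithmetic can be checked explicitly. A sketch with the dtype pinned to int64 so each element is 8 bytes regardless of platform (np.random.randint uses a platform-dependent default integer dtype):

```python
import numpy as np

# 2621 * 50 * 8 = 1,048,400 bytes; 2622 * 50 * 8 = 1,048,800 bytes.
below = np.empty((2621, 50), dtype=np.int64)
above = np.empty((2622, 50), dtype=np.int64)

print(below.nbytes / 1024**2)  # ~0.99983 MB, just under the threshold
print(above.nbytes / 1024**2)  # ~1.00021 MB, just over the threshold
```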


You may have to use np.load('X.npy', mmap_mode='r+') as per the documentation of numpy.load.
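For what it's worth, here is what mmap_mode='r+' does (a sketch with a temporary file): the array comes back as a writable memory map, so in-place updates succeed in the loading process. As the last traceback shows, though, joblib can still hand its workers a fresh read-only map, so this alone did not fix the error here.

```python
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "X.npy")
np.save(path, np.zeros((2, 2)))

X = np.load(path, mmap_mode="r+")  # writable memory map backed by the file
X += 1.0                           # in-place modification now succeeds
X.flush()                          # push the change back to disk

print(np.load(path)[0, 0])  # 1.0
```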


Try changing the number of jobs, maybe to 1 for a start. I ran into the same error when running RandomizedLogisticRegression with n_jobs=20 (on a powerful machine). However, the code ran without any problems when n_jobs was set to the default 1.





Comments
  • Thanks; however, the same error still appears.
  • @HendraBunyamin can you post the full traceback?
  • I'm confused; did you try X = np.load( 'data/issuemakers/features/new_X.npy', mmap_mode='r+')?
  • Sorry, I hadn't used mmap_mode='r+'; I have now posted again with it. Thank you.