Hot questions for using neural networks in Spyder

Question:

I am teaching myself deep learning and I am running into an issue while building an ANN. Here's what I am doing:

Initializing the ANN (I've split the dataset beforehand):

classifier = Sequential()

Adding the input layer and the first hidden layer:

classifier.add(Dense(input_dim = 11, kernel_initializer = 'uniform', activation = 'relu', units = 6))

Adding the second hidden layer:

classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))

Adding the output layer:

classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))

Compiling the ANN by employing Stochastic gradient descent:

classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])

After this, when I select and run the last command, I get an error that reads:

TypeError: sigmoid_cross_entropy_with_logits() got an unexpected keyword argument 'labels'

I noticed that when I use loss = 'mean_squared_error' it compiles fine. Can you tell me what's going on?

Spyder and Python are the latest versions as of the day I am posting this, on Windows 10. Theano, TensorFlow and Keras are also the latest versions.

Thanks in advance.


Answer:

Update your TensorFlow version with a nightly build:

https://github.com/tensorflow/tensorflow#installation

See also this issue: https://github.com/carpedm20/DCGAN-tensorflow/issues/84
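This error typically means that the installed TensorFlow is older than the Keras calling into it: older TensorFlow releases exposed sigmoid_cross_entropy_with_logits(logits, targets) without a labels= keyword, which newer Keras passes. A minimal sketch for checking the installed versions before upgrading (the pip commands in the comments assume a pip-based install):

import tensorflow as tf
import keras

# If the TensorFlow version is old (pre-1.0), its sigmoid_cross_entropy_with_logits
# has no labels= keyword, which is exactly what triggers the TypeError above.
print('TensorFlow:', tf.__version__)
print('Keras:', keras.__version__)

# Upgrade from a terminal, e.g.:
#   pip install --upgrade tensorflow
# or install a nightly build following the instructions linked above.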

Question:

The following example is from here. It is an example of training a GAN.

# Deep Convolutional GANs

# Importing the libraries
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
from torch.autograd import Variable

# Setting some hyperparameters
batchSize = 64 # We set the size of the batch.
imageSize = 64 # We set the size of the generated images (64x64).

# Creating the transformations
transform = transforms.Compose([transforms.Scale(imageSize), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),]) # We create a list of transformations (scaling, tensor conversion, normalization) to apply to the input images.

# Loading the dataset
dataset = dset.CIFAR10(root = './data', download = True, transform = transform) # We download the training set in the ./data folder and we apply the previous transformations on each image.
dataloader = torch.utils.data.DataLoader(dataset, batch_size = batchSize, shuffle = True, num_workers = 2) # We use dataLoader to get the images of the training set batch by batch.

# Defining the weights_init function that takes as input a neural network m and that will initialize all its weights.
def weights_init(m):
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        m.weight.data.normal_(0.0, 0.02)
    elif classname.find('BatchNorm') != -1:
        m.weight.data.normal_(1.0, 0.02)
        m.bias.data.fill_(0)

# Defining the generator

class G(nn.Module): # We introduce a class to define the generator.

    def __init__(self): # We introduce the __init__() function that will define the architecture of the generator.
        super(G, self).__init__() # We inherit from the nn.Module tools.
        self.main = nn.Sequential( # We create a meta module of a neural network that will contain a sequence of modules (convolutions, full connections, etc.).
            nn.ConvTranspose2d(100, 512, 4, 1, 0, bias = False), # We start with an inversed convolution.
            nn.BatchNorm2d(512), # We normalize all the features along the dimension of the batch.
            nn.ReLU(True), # We apply a ReLU rectification to break the linearity.
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias = False), # We add another inversed convolution.
            nn.BatchNorm2d(256), # We normalize again.
            nn.ReLU(True), # We apply another ReLU.
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias = False), # We add another inversed convolution.
            nn.BatchNorm2d(128), # We normalize again.
            nn.ReLU(True), # We apply another ReLU.
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias = False), # We add another inversed convolution.
            nn.BatchNorm2d(64), # We normalize again.
            nn.ReLU(True), # We apply another ReLU.
            nn.ConvTranspose2d(64, 3, 4, 2, 1, bias = False), # We add another inversed convolution.
            nn.Tanh() # We apply a Tanh rectification to break the linearity and stay between -1 and +1.
        )

    def forward(self, input): # We define the forward function that takes as argument an input that will be fed to the neural network, and that will return the output containing the generated images.
        output = self.main(input) # We forward propagate the signal through the whole neural network of the generator defined by self.main.
        return output # We return the output containing the generated images.

# Creating the generator
netG = G() # We create the generator object.
netG.apply(weights_init) # We initialize all the weights of its neural network.

# Defining the discriminator

class D(nn.Module): # We introduce a class to define the discriminator.

    def __init__(self): # We introduce the __init__() function that will define the architecture of the discriminator.
        super(D, self).__init__() # We inherit from the nn.Module tools.
        self.main = nn.Sequential( # We create a meta module of a neural network that will contain a sequence of modules (convolutions, full connections, etc.).
            nn.Conv2d(3, 64, 4, 2, 1, bias = False), # We start with a convolution.
            nn.LeakyReLU(0.2, inplace = True), # We apply a LeakyReLU.
            nn.Conv2d(64, 128, 4, 2, 1, bias = False), # We add another convolution.
            nn.BatchNorm2d(128), # We normalize all the features along the dimension of the batch.
            nn.LeakyReLU(0.2, inplace = True), # We apply another LeakyReLU.
            nn.Conv2d(128, 256, 4, 2, 1, bias = False), # We add another convolution.
            nn.BatchNorm2d(256), # We normalize again.
            nn.LeakyReLU(0.2, inplace = True), # We apply another LeakyReLU.
            nn.Conv2d(256, 512, 4, 2, 1, bias = False), # We add another convolution.
            nn.BatchNorm2d(512), # We normalize again.
            nn.LeakyReLU(0.2, inplace = True), # We apply another LeakyReLU.
            nn.Conv2d(512, 1, 4, 1, 0, bias = False), # We add another convolution.
            nn.Sigmoid() # We apply a Sigmoid rectification to break the linearity and stay between 0 and 1.
        )

    def forward(self, input): # We define the forward function that takes as argument an input that will be fed to the neural network, and that will return the output which will be a value between 0 and 1.
        output = self.main(input) # We forward propagate the signal through the whole neural network of the discriminator defined by self.main.
        return output.view(-1) # We return the output which will be a value between 0 and 1.

# Creating the discriminator
netD = D() # We create the discriminator object.
netD.apply(weights_init) # We initialize all the weights of its neural network.

# Training the DCGANs

criterion = nn.BCELoss() # We create a criterion object that will measure the error between the prediction and the target.
optimizerD = optim.Adam(netD.parameters(), lr = 0.0002, betas = (0.5, 0.999)) # We create the optimizer object of the discriminator.
optimizerG = optim.Adam(netG.parameters(), lr = 0.0002, betas = (0.5, 0.999)) # We create the optimizer object of the generator.

for epoch in range(25): # We iterate over 25 epochs.

    for i, data in enumerate(dataloader, 0): # We iterate over the images of the dataset.

        # 1st Step: Updating the weights of the neural network of the discriminator

        netD.zero_grad() # We initialize to 0 the gradients of the discriminator with respect to the weights.

        # Training the discriminator with a real image of the dataset
        real, _ = data # We get a real image of the dataset which will be used to train the discriminator.
        input = Variable(real) # We wrap it in a variable.
        target = Variable(torch.ones(input.size()[0])) # We get the target.
        output = netD(input) # We forward propagate this real image into the neural network of the discriminator to get the prediction (a value between 0 and 1).
        errD_real = criterion(output, target) # We compute the loss between the predictions (output) and the target (equal to 1).

        # Training the discriminator with a fake image generated by the generator
        noise = Variable(torch.randn(input.size()[0], 100, 1, 1)) # We make a random input vector (noise) of the generator.
        fake = netG(noise) # We forward propagate this random input vector into the neural network of the generator to get some fake generated images.
        target = Variable(torch.zeros(input.size()[0])) # We get the target.
        output = netD(fake.detach()) # We forward propagate the fake generated images into the neural network of the discriminator to get the prediction (a value between 0 and 1).
        errD_fake = criterion(output, target) # We compute the loss between the prediction (output) and the target (equal to 0).

        # Backpropagating the total error
        errD = errD_real + errD_fake # We compute the total error of the discriminator.
        errD.backward() # We backpropagate the loss error by computing the gradients of the total error with respect to the weights of the discriminator.
        optimizerD.step() # We apply the optimizer to update the weights according to how much they are responsible for the loss error of the discriminator.

        # 2nd Step: Updating the weights of the neural network of the generator

        netG.zero_grad() # We initialize to 0 the gradients of the generator with respect to the weights.
        target = Variable(torch.ones(input.size()[0])) # We get the target.
        output = netD(fake) # We forward propagate the fake generated images into the neural network of the discriminator to get the prediction (a value between 0 and 1).
        errG = criterion(output, target) # We compute the loss between the prediction (output between 0 and 1) and the target (equal to 1).
        errG.backward() # We backpropagate the loss error by computing the gradients of the total error with respect to the weights of the generator.
        optimizerG.step() # We apply the optimizer to update the weights according to how much they are responsible for the loss error of the generator.

        # 3rd Step: Printing the losses and saving the real images and the generated images of the minibatch every 100 steps

        print('[%d/%d][%d/%d] Loss_D: %.4f Loss_G: %.4f' % (epoch, 25, i, len(dataloader), errD.data[0], errG.data[0])) # We print the losses of the discriminator (Loss_D) and the generator (Loss_G).
        if i % 100 == 0: # Every 100 steps:
            vutils.save_image(real, '%s/real_samples.png' % "./results", normalize = True) # We save the real images of the minibatch.
            fake = netG(noise) # We get our fake generated images.
            vutils.save_image(fake.data, '%s/fake_samples_epoch_%03d.png' % ("./results", epoch), normalize = True) # We also save the fake generated images of the minibatch.

However, when I execute this example, it returns the error

BrokenPipeError: [Errno 32] Broken pipe

which seems to happen at the line

for i, data in enumerate(dataloader, 0): # We iterate over the images of the dataset.

Here is the entire traceback:

runfile('C:/Users/ncui/Dropbox/JuJu/Base_projects/Udemy/Computer_Vision_A_Z/Module 3 - GANs/dcgan_commented.py', wdir='C:/Users/ncui/Dropbox/JuJu/Base_projects/Udemy/Computer_Vision_A_Z/Module 3 - GANs')
Files already downloaded and verified
Traceback (most recent call last):

  File "<ipython-input-4-a3a7a503f14c>", line 1, in <module>
    runfile('C:/Users/ncui/Dropbox/JuJu/Base_projects/Udemy/Computer_Vision_A_Z/Module 3 - GANs/dcgan_commented.py', wdir='C:/Users/ncui/Dropbox/JuJu/Base_projects/Udemy/Computer_Vision_A_Z/Module 3 - GANs')

  File "C:\Users\ncui\AppData\Local\Continuum\anaconda3\envs\tensorflow\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 786, in runfile
    execfile(filename, namespace)

  File "C:\Users\ncui\AppData\Local\Continuum\anaconda3\envs\tensorflow\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "C:/Users/ncui/Dropbox/JuJu/Base_projects/Udemy/Computer_Vision_A_Z/Module 3 - GANs/dcgan_commented.py", line 104, in <module>
    for i, data in enumerate(dataloader, 0): # We iterate over the images of the dataset.

  File "C:\Users\ncui\AppData\Local\Continuum\anaconda3\envs\tensorflow\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __iter__
    return _DataLoaderIter(self)

  File "C:\Users\ncui\AppData\Local\Continuum\anaconda3\envs\tensorflow\lib\site-packages\torch\utils\data\dataloader.py", line 560, in __init__
    w.start()

  File "C:\Users\ncui\AppData\Local\Continuum\anaconda3\envs\tensorflow\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)

  File "C:\Users\ncui\AppData\Local\Continuum\anaconda3\envs\tensorflow\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)

  File "C:\Users\ncui\AppData\Local\Continuum\anaconda3\envs\tensorflow\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)

  File "C:\Users\ncui\AppData\Local\Continuum\anaconda3\envs\tensorflow\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)

  File "C:\Users\ncui\AppData\Local\Continuum\anaconda3\envs\tensorflow\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)

BrokenPipeError: [Errno 32] Broken pipe

I tried to step through it, but I cannot see dataloader, i, and data in my Variable Explorer, which I don't quite understand.

I am on Windows 7, Python 3.6, and I am using Spyder as my Python IDE. The data used in this script can be found here.

Can anyone give some pointers on

  • how to fix this error?
  • why is this error happening?
  • why I cannot see dataloader, i, and data in my Variable Explorer?
  • how I can see what dataloader, i, and data are, if possible?
  • any other useful information.

Thanks a lot.


Answer:

Add

if __name__ == "__main__":

in front of the first for loop (and indent the training loop under it). On Windows, DataLoader worker processes are started with multiprocessing's spawn start method, which re-imports your script in each worker; without this guard the workers try to re-execute the training code, and the pipe between the processes breaks, hence the BrokenPipeError. A quick workaround is to set num_workers = 0 in the DataLoader so that no worker processes are spawned at all.
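A minimal sketch of the resulting layout (assuming the rest of the tutorial code is unchanged); everything above the training loop can stay at module level, because the worker processes are only started once the DataLoader is iterated:

# imports, transform, dataset, dataloader, weights_init, G, D, netG, netD,
# criterion, optimizerD and optimizerG stay exactly as in the question, at module level.

if __name__ == "__main__":                    # only the main process runs this block
    for epoch in range(25):
        for i, data in enumerate(dataloader, 0):
            # ... the three training steps exactly as in the question ...
            pass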

Question:

I'm using XGBoost for feature importance and I want to select the features that account for 90% of the importance. First I build a DataFrame, because I need it for Excel, and then I write a while loop to evaluate the features that give me 90% of the importance. After this there is a neural network (but it isn't in the code below). I know there is probably an easier way to do this, but it gives me an error:

ValueError: could not convert string to float: '0,25691372'

The code is

import pandas as pd
import numpy as np

from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel
from sklearn import preprocessing

from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor
from matplotlib import pyplot as plt


dataset = pd.read_csv('CompleteDataSet_original_Clean_CONC.csv', decimal=',', delimiter = ";")
from sklearn.metrics import r2_score

label = dataset.iloc[:,-1]
features = dataset.drop(columns = ['Label'])
y_max_pre_normalize = max(label)
y_min_pre_normalize = min(label)

def denormalize(y):
    final_value = y*(y_max_pre_normalize-y_min_pre_normalize)+y_min_pre_normalize
    return final_value
X_train1, X_test1, y_train1, y_test1 = train_test_split(features, label, test_size = 0.20, random_state = 1, shuffle = True)

y_test2 = y_test1.to_frame()
y_train2 = y_train1.to_frame()

scaler1 = preprocessing.MinMaxScaler()
scaler2 = preprocessing.MinMaxScaler()
X_train = scaler1.fit_transform(X_train1)
X_test = scaler2.fit_transform(X_test1)


scaler3 = preprocessing.MinMaxScaler()
scaler4 = preprocessing.MinMaxScaler()
y_train = scaler3.fit_transform(y_train2)
y_test = scaler4.fit_transform(y_test2)


sel = XGBRegressor(colsample_bytree= 0.7, learning_rate = 0.005, max_depth = 5, min_child_weight = 3, n_estimators = 1000)
sel.fit(X_train, y_train)
importances = sel.feature_importances_

importances = [str(i) for i in importances]

importances = [i.replace(".", ",") for i in importances]

df1 = pd.DataFrame(features.columns)
df1.columns = ['Features']
df2 = pd.DataFrame(importances)
df2.columns = ['Importances [%]']
result = pd.concat([df1,df2],axis = 1)
result = result.sort_values(by='Importances [%]', ascending=False)

result.to_excel("Feature_Results.xlsx") 

i = 0
somma = 0
feature = []
while somma <=0.9:
    a = result.iloc[i,-1]
    somma = float(a) + somma
    feature.append(result.iloc[i,-2])
    i = i + 1

Answer:

You could use locale.atof() to handle ',' being used as the decimal separator:

import locale
locale.setlocale(locale.LC_ALL, 'fr_FR')
...
    somma = locale.atof(a) + somma
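A simpler alternative, if you don't want to rely on a specific locale being installed, is to keep the importances numeric for the 90% loop and only switch the decimal mark at export time. A minimal sketch (the names features and sel come from the question's code; the one-line fix in the first comment reuses a from the while loop):

import pandas as pd

# Option 1: inside the while loop, undo the comma substitution before converting back:
#     somma = float(a.replace(',', '.')) + somma

# Option 2: keep the importances as floats for the loop and let the writer apply
# the comma decimal mark only when exporting.
result_numeric = pd.DataFrame({'Features': features.columns,
                               'Importances [%]': sel.feature_importances_})
result_numeric = result_numeric.sort_values(by='Importances [%]', ascending=False)
result_numeric.to_csv('Feature_Results.csv', sep=';', decimal=',', index=False)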

Question:

I'm new to neural networks. I know that dropout must be turned off during validation/testing, because dropout makes neurons output 'wrong' values on purpose, so turning it off gives better accuracy. How can I do that in my code? The dataset is composed of 18 features and 1 label, and it is a regression problem.

import pandas as pd
import numpy as np

from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel
from sklearn import preprocessing

from sklearn.model_selection import train_test_split

from matplotlib import pyplot as plt

import keras
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, BatchNormalization
from tensorflow.keras.callbacks import EarlyStopping
from keras import optimizers
from sklearn.metrics import r2_score
from keras import regularizers
from keras import backend
from tensorflow.keras import regularizers
from keras.regularizers import l2

# =============================================================================
# Scelgo il test size
# =============================================================================
test_size = 0.2

dataset = pd.read_csv('DataSet.csv', decimal=',', delimiter = ";")

label = dataset.iloc[:,-1]
features = dataset.drop(columns = ['Label'])

y_max_pre_normalize = max(label)
y_min_pre_normalize = min(label)

def denormalize(y):
    final_value = y*(y_max_pre_normalize-y_min_pre_normalize)+y_min_pre_normalize
    return final_value

# =============================================================================
# Split
# =============================================================================

X_train1, X_test1, y_train1, y_test1 = train_test_split(features, label, test_size = test_size, shuffle = True)

y_test2 = y_test1.to_frame()
y_train2 = y_train1.to_frame()

# =============================================================================
# Normalizzo
# =============================================================================
scaler1 = preprocessing.MinMaxScaler()
scaler2 = preprocessing.MinMaxScaler()
X_train = scaler1.fit_transform(X_train1)
X_test = scaler2.fit_transform(X_test1)


scaler3 = preprocessing.MinMaxScaler()
scaler4 = preprocessing.MinMaxScaler()
y_train = scaler3.fit_transform(y_train2)
y_test = scaler4.fit_transform(y_test2)



# =============================================================================
# Creo la rete
# =============================================================================
optimizer = tf.keras.optimizers.Adam(lr=0.001)
model = Sequential()

model.add(Dense(60, input_shape = (X_train.shape[1],), activation = 'relu',kernel_initializer='glorot_uniform'))
model.add(Dropout(0.2))
model.add(Dense(60, activation = 'relu',kernel_initializer='glorot_uniform'))
model.add(Dropout(0.2))
model.add(Dense(60, activation = 'relu',kernel_initializer='glorot_uniform'))

model.add(Dense(1,activation = 'linear',kernel_initializer='glorot_uniform'))

model.compile(loss = 'mse', optimizer = optimizer, metrics = ['mse'])

history = model.fit(X_train, y_train, epochs = 100,
                    validation_split = 0.1, shuffle=True, batch_size=250
                    )

history_dict = history.history

loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']

y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)

y_train_pred = denormalize(y_train_pred)
y_test_pred = denormalize(y_test_pred)


plt.figure()
plt.plot((y_test1),(y_test_pred),'.', color='darkviolet', alpha=1, marker='o', markersize = 2, markeredgecolor = 'black', markeredgewidth = 0.1)
plt.plot((np.array((-0.1,7))),(np.array((-0.1,7))),'-', color='magenta')
plt.xlabel('True')
plt.ylabel('Predicted')
plt.title('Test')

plt.figure()
plt.plot((y_train1),(y_train_pred),'.', color='darkviolet', alpha=1, marker='o', markersize = 2, markeredgecolor = 'black', markeredgewidth = 0.1)
plt.plot((np.array((-0.1,7))),(np.array((-0.1,7))),'-', color='magenta')
plt.xlabel('True')
plt.ylabel('Predicted')
plt.title('Train')

plt.figure()
plt.plot(loss_values,'b',label = 'training loss')
plt.plot(val_loss_values,'r',label = 'val training loss')
plt.xlabel('Epochs')
plt.ylabel('Loss Function')
plt.legend()

print("\n\nThe R2 score on the test set is:\t{:0.3f}".format(r2_score(y_test_pred, y_test1)))

print("The R2 score on the train set is:\t{:0.3f}".format(r2_score(y_train_pred, y_train1)))
from sklearn import metrics

# Measure MSE error.  
score = metrics.mean_squared_error(y_test_pred,y_test1)
print("\n\nFinal score test (MSE): %0.4f" %(score))
score1 = metrics.mean_squared_error(y_train_pred,y_train1)
print("Final score train (MSE): %0.4f" %(score1))
score2 = np.sqrt(metrics.mean_squared_error(y_test_pred,y_test1))
print(f"Final score test (RMSE): %0.4f" %(score2))
score3 = np.sqrt(metrics.mean_squared_error(y_train_pred,y_train1))
print(f"Final score train (RMSE): %0.4f" %(score3))

Answer:

TensorFlow, Keras and other deep learning libraries take care of this behind the scenes; you don't have to explicitly remove dropout for inference. Dropout is only active during the training phase. Also, dropout just drops neurons, along with their incoming and outgoing connections, at random in the respective layer on every iteration, and its purpose is to avoid overfitting. It has nothing to do with 'correct' or 'wrong' outputs of the layers.
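If you want to convince yourself that Keras really disables dropout outside of training, here is a standalone sketch (it does not use the question's data; the model contains only a Dropout layer so the effect is easy to see):

import numpy as np
import tensorflow as tf

inp = tf.keras.Input(shape=(4,))
out = tf.keras.layers.Dropout(0.5)(inp)
demo = tf.keras.Model(inp, out)

x = np.ones((1, 4), dtype='float32')

print(demo(x, training=True))    # training mode: some entries zeroed, the rest scaled by 1/(1 - rate)
print(demo(x, training=False))   # inference mode: the input passes through unchanged
print(demo.predict(x))           # model.predict() also runs in inference mode, as in the question's code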

Question:

Good morning! I'm new to Python and I use Spyder 4.0 to build neural networks. In the script below I use a random forest to compute feature importances, so the values in importances tell me how important each feature is. Unfortunately I can't upload the dataset, but I can tell you that there are 18 features and 1 label, both physical quantities, and it's a regression problem. I want to export the variable importances to an Excel file, but when I do it (simply copying the vector) the numbers use a dot (e.g. 0.012, 0.015, etc.). To use them in the Excel file I'd prefer a comma instead of the dot. I tried to use .replace('.',',') but it doesn't work; the error is:

AttributeError: 'numpy.ndarray' object has no attribute 'replace'

I think this happens because the vector importances is an array of float64 with shape (18,). What can I do?

Thanks.

import pandas as pd
import numpy as np

from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel
from sklearn import preprocessing

from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt

dataset = pd.read_csv('Dataset.csv', decimal=',', delimiter = ";")


label = dataset.iloc[:,-1]
features = dataset.drop(columns = ['Label'])
y_max_pre_normalize = max(label)
y_min_pre_normalize = min(label)

def denormalize(y):
    final_value = y*(y_max_pre_normalize-y_min_pre_normalize)+y_min_pre_normalize
    return final_value

X_train1, X_test1, y_train1, y_test1 = train_test_split(features, label, test_size = 0.20, shuffle = True)

y_test2 = y_test1.to_frame()
y_train2 = y_train1.to_frame()

scaler1 = preprocessing.MinMaxScaler()
scaler2 = preprocessing.MinMaxScaler()
X_train = scaler1.fit_transform(X_train1)
X_test = scaler2.fit_transform(X_test1)


scaler3 = preprocessing.MinMaxScaler()
scaler4 = preprocessing.MinMaxScaler()
y_train = scaler3.fit_transform(y_train2)
y_test = scaler4.fit_transform(y_test2)


sel = RandomForestRegressor(n_estimators = 200,max_depth = 9, max_features = 5, min_samples_leaf = 1, min_samples_split = 2,bootstrap = False)
sel.fit(X_train, y_train)
importances = sel.feature_importances_

# sel.fit(X_train, y_train)
# a = []
# for feature_list_index in sel.get_support(indices=True):
#     a.append(feat_labels[feature_list_index])
#     print(feat_labels[feature_list_index])

# X_important_train = sel.transform(X_train1)
# X_important_test = sel.transform(X_test1)

Answer:

I will try to show you an example of what you should do, using some random values. I ran this in the Python shell, which is why you also see the ">>>" prompts.

>>> import numpy as np  # first I import numpy as "np"
# I generate 10 random values and I store them in "importance"
>>> importance=np.random.rand(10)
# here I just want to see the content of "importance"
>>> importance
array([0.77609076, 0.97746829, 0.56946118, 0.23986983, 0.93655692,
       0.22003531, 0.7711095 , 0.36083248, 0.58277805, 0.57865248])
# here is your error, which I reproduce for teaching purposes
>>> importance.replace(".", ",")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'numpy.ndarray' object has no attribute 'replace'

What you need to do is convert the elements of "importance" to a list of strings:

>>> imp_astr=[str(i) for i in importance]
>>> imp_astr
['0.7760907642658763', '0.9774682868805988', '0.569461184647781', '0.23986982589422634', '0.9365569207431337', '0.22003531170279356', '0.7711094966708247', '0.3608324767276052', '0.5827780487688116', '0.5786524781334242']
# at the end, for each string, you can use the "replace" function
>>> imp_astr=[i.replace(".", ",") for i in imp_astr]
>>> imp_astr
['0,7760907642658763', '0,9774682868805988', '0,569461184647781', '0,23986982589422634', '0,9365569207431337', '0,22003531170279356', '0,7711094966708247', '0,3608324767276052', '0,5827780487688116', '0,5786524781334242']
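
If the comma-separated strings are only needed for an Excel export, a vectorised alternative using pandas avoids the explicit loop (a sketch; "importance" is the same random array as above):

import pandas as pd

s = pd.Series(importance)                                        # "importance" as generated above
imp_astr = s.map(lambda v: str(v).replace('.', ',')).tolist()    # same result as the list comprehension

# Or skip the string conversion entirely and let the writer set the decimal mark:
s.to_csv('importances.csv', sep=';', decimal=',', header=False)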