Questions tagged [hyperparameters]

1 vote · 0 answers · 195 views

Predictive Maintenance - How to use Bayesian Optimization with an objective function and Logistic Regression with Gradient Descent together?

I'm trying to reproduce the problem shown on arimo.com. It is an example of how to build a preventive maintenance machine learning model for hard drive failures. The section I really don't understand is how to use Bayesian Optimization with a custom objective function and Logistic Regression with G...
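A minimal sketch of the general pattern, assuming the scikit-optimize (skopt) package and an SGD-trained logistic regression as a stand-in for the model in the article; the objective function and search space below are illustrative, not the ones from arimo.com:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score
from skopt import gp_minimize
from skopt.space import Real

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

def objective(params):
    alpha, eta0 = params
    # logistic regression trained by gradient descent ('log_loss' in newer scikit-learn)
    clf = SGDClassifier(loss='log', penalty='l2', alpha=alpha,
                        learning_rate='constant', eta0=eta0, random_state=0)
    # gp_minimize minimizes, so return the negative mean CV accuracy
    return -cross_val_score(clf, X, y, cv=3, scoring='accuracy').mean()

space = [Real(1e-6, 1e-1, prior='log-uniform'),   # alpha (regularization strength)
         Real(1e-4, 1e-0, prior='log-uniform')]   # eta0 (learning rate)

result = gp_minimize(objective, space, n_calls=25, random_state=0)
print(result.x, -result.fun)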
1 vote · 1 answer · 306 views

Using GridSearchCV() with holdout validation

GridSearchCV() has a cv argument whose default value is 3, which means 3-fold cross-validation. Is there any way to use GridSearchCV() with a holdout validation scheme, for example an 80-20% split?
Khan
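One way to do this (a sketch, assuming sklearn's PredefinedSplit and a RandomForest as the estimator) is to mark 20% of the rows as the single validation fold:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, PredefinedSplit

X, y = make_classification(n_samples=500, random_state=0)

# -1 marks rows used only for training; 0 marks the single holdout fold (last 20%)
test_fold = np.full(len(X), -1)
test_fold[int(0.8 * len(X)):] = 0

gs = GridSearchCV(RandomForestClassifier(random_state=0),
                  param_grid={'max_depth': [3, 5, None]},
                  cv=PredefinedSplit(test_fold))
gs.fit(X, y)

Passing an explicit iterable of (train_indices, test_indices) pairs to cv achieves the same single-split behaviour.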
1 vote · 0 answers · 265 views

How to get confusion matrix for best_estimator of GridSearchCV

I'm doing parameter tuning for a RandomForestClassifier using GridSearchCV. For evaluation purposes I want a confusion matrix for the best_estimator, which, as far as I know, is not saved by GridSearchCV. gs = GridSearchCV(RandomForestClassifier(n_estimators=1000, random_state=42), param_grid={'max_dep...
Christian
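A sketch of the usual approach: with refit=True (the default) the tuned model is available as gs.best_estimator_, so the confusion matrix is computed on data held out before the search. The data and grid below are placeholders:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

gs = GridSearchCV(RandomForestClassifier(n_estimators=100, random_state=42),
                  param_grid={'max_depth': [3, 5, None]}, cv=3)
gs.fit(X_train, y_train)                      # refit=True retrains best_estimator_ on all of X_train

y_pred = gs.best_estimator_.predict(X_test)   # equivalent to gs.predict(X_test)
print(confusion_matrix(y_test, y_pred))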
1 vote · 0 answers · 45 views

Score on the test set is higher than on the training set, wrong approach?

I have a relatively small, imbalanced data set (~3k datapoints, 12 classes). I want to tune the parameters of a RandomForestClassifier and eventually test the model. Currently I'm doing it like this, but strangely it yields higher scores on the test set than on the training set (I used cohen_kappa_...
Christian
1 vote · 0 answers · 253 views

GridSearch with XGBoost producing DeprecationWarning in an infinite loop

I am trying to do hyperparameter tuning using GridSearchCV on XGBoost, but I'm getting the following warning: /usr/local/lib/python3.6/dist-packages/sklearn/preprocessing/label.py:151: DeprecationWarning: The truth value of an empty array is ambiguous. Returning False, but in future this will result...
AJITH CHACKO
1 vote · 1 answer · 347 views

Hyperparameter optimization for a neural network written in Keras

Is there a Python 3 library that optimizes Keras NN hyperparameters on a GPU? I have tried using sklearn with the KerasClassifier wrapper, but it uses the CPU.
Akanksha
1 vote · 1 answer · 979 views

Retrieve parameters from a loaded model in xgboost

I have made a classification model, which was saved using bst.save_model('final_model.model'). In another file I load the model and test it on my test data using: bst = xgb.Booster() # init model bst.load_model('final_model.model') # load model ypred = bst.predict(dtest) # make prediction Si...
CuriousGeorge
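A sketch, assuming a reasonably recent xgboost (>= 1.0) and the saved file from the question: the Booster's internal configuration can be dumped as JSON with save_config(), and any user-set attributes are available via attributes():

import json
import xgboost as xgb

bst = xgb.Booster()
bst.load_model('final_model.model')

config = json.loads(bst.save_config())       # full internal configuration as a nested dict (xgboost >= 1.0)
print(json.dumps(config, indent=2)[:500])    # inspect the stored training parameters
print(bst.attributes())                      # only attributes explicitly set with set_attr() before saving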
1 vote · 0 answers · 27 views

Same prediction for different hyperparameters in Keras

I'm trying to perform hyperparameter optimization on a neural net, but as soon as I try a larger number of hidden layers, my neural network always predicts the same output, so my list of (negative) losses looks like: -0.789302627913455 -0.789302627913455 -0.789302627913455 -1 -0.78930262791345...
shup
1 vote · 1 answer · 183 views

Running h2o grid search in R

I am running an h2o grid search in R. The model is a GLM using a gamma distribution. I have defined the grid using the following settings: hyper_parameters = list(alpha = c(0, .5), missing_values_handling = c('Skip', 'MeanImputation')) h2o.grid(algorithm = 'glm', # Setting al...
1 vote · 0 answers · 37 views

Sklearn GridSearchCV Delay

I am using this code to do a grid search grid_search = GridSearchCV(gbm, param_grid, scoring='neg_mean_squared_error', n_jobs=-1, cv=predefined_split, verbose=2) grid_result = grid_search.fit(df[features], df[target]) # summarize results print('Best: %f using %s' % (grid_result.best_score_, grid_res...
Baron Yugovich
1 vote · 1 answer · 157 views

mlr: Tune model parameters with validation set

Just switched to mlr for my machine learning workflow. I am wondering whether it is possible to tune hyperparameters using a separate validation set. From my limited understanding, makeResampleDesc and makeResampleInstance accept only resampling from the training data. My goal is to tune parameters with a v...
Boxuan
1 vote · 0 answers · 76 views

Exactly the same score values for different hyperparameter configurations in sklearn RandomizedSearchCV

I am trying to find the optimal hyperparameter configuration for an sklearn pipeline with a customised unsupervised model that transforms my data into vector representations, which are then used by a random forest classifier to predict labels. However, when I use sklearn's randomized search I often get dif...
m3i
1 vote · 1 answer · 244 views

Sklearn MLP Classifier Hidden Layers Optimization (RandomizedSearchCV)

I have the following parameters set up : parameter_space = { 'hidden_layer_sizes': [(sp_randint.rvs(100,600,1),sp_randint.rvs(100,600,1),), (sp_randint.rvs(100,600,1),)], 'activation': ['tanh', 'relu', 'logistic'], 'solver': ['sgd', 'adam', 'lbfgs'], 'alpha': stats.uniform(0.0001, 0.9), 'learning_r...
MG_Ghost
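One pitfall in this setup is that sp_randint.rvs(...) is evaluated once, when the dictionary is built, so the search only ever sees two fixed tuples. A sketch of a workaround, assuming you are happy to pre-generate a pool of candidate architectures that RandomizedSearchCV then samples from uniformly:

import numpy as np
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

rng = np.random.RandomState(42)
# pre-generate candidate architectures: 10 one-layer and 10 two-layer tuples
hidden_layer_candidates = [(rng.randint(100, 600),) for _ in range(10)] + \
                          [(rng.randint(100, 600), rng.randint(100, 600)) for _ in range(10)]

parameter_space = {
    'hidden_layer_sizes': hidden_layer_candidates,   # lists are sampled uniformly at each iteration
    'activation': ['tanh', 'relu', 'logistic'],
    'alpha': stats.uniform(0.0001, 0.9),
}

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
search = RandomizedSearchCV(MLPClassifier(max_iter=50), parameter_space,
                            n_iter=5, cv=3, random_state=0)
search.fit(X, y)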
1 vote · 1 answer · 93 views

Should the hyperparameter metric in Google Cloud ML contain the `val` prefix?

When defining the hyperparameter metric for Google Cloud ML I can use mean_squared_error, but should I be using val_mean_squared_error instead if I want it to compare validation-set performance? Or does it do that on its own? This is the sample hptuning config: trainingInput: ... hyperparameters...
Guilherme Silveira
1 vote · 1 answer · 58 views

Different models do incremental fit for an RNN model during hyperparameter tuning

I am quite new to deep learning and I was studying this RNN example. After completing the tutorial, I decided to see the effect of various hyperparameters, such as the number of nodes in each layer, the dropout factor, etc. What I do is, for each value in my lists, create a new model using a set of p...
smttsp
1 vote · 1 answer · 42 views

AUC of random forest model is lower after tuning parameters using hyperparameter grid search and 10-fold CV

The AUC value I received without tuning the hyperparameters was higher. I have used the same training data; could there be something I am missing here, or is there some valid explanation? The data is the average of the word embeddings of a tweet, calculated using pretrained GloVe vectors for tweets with 5...
aastha
1 vote · 1 answer · 22 views

How to compare two hyperparameters in a hierarchical model?

In one hierarchical model, we have two hyperparameters: dnorm(A_mu, 0.25^-2) and dnorm(B_mu, 0.25^-2). In this case 0.25 is the sd, which I keep fixed. A_mu and B_mu represent the group-level means. After fitting the data with rjags, we get the distributions for each parameter. So I just direc...
Pan
1 vote · 0 answers · 26 views

GaussianProcessRegressor Fitting Kernel/Hyperparameters

Good day everyone. I am using a GaussianProcessRegressor object from the sklearn library. After fitting the model, I want to sample points using predict to get a better idea of what the model looks like so far. But now I get the issue that it just assumed the points zer...
robTheBob86
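A sketch of how the kernel hyperparameters are usually set up and refit in sklearn; the kernel choice and normalize_y=True here are assumptions, since the original code is not shown. Far away from the training points the posterior falls back to the prior mean, which is often what looks like "the points are assumed zero":

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

X_train = np.random.RandomState(0).uniform(0, 10, size=(30, 1))
y_train = np.sin(X_train).ravel() + 3.0          # deliberately non-zero mean

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True, n_restarts_optimizer=5)
gp.fit(X_train, y_train)

X_grid = np.linspace(0, 10, 200).reshape(-1, 1)
mean, std = gp.predict(X_grid, return_std=True)  # posterior mean and uncertainty
print(gp.kernel_)                                # fitted (optimized) hyperparameters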
1 vote · 1 answer · 85 views

num_leaves selection in LightGBM?

Is there any rule of thumb for initializing the num_leaves parameter in LightGBM? For example, for a dataset with 1000 features, we know that a tree depth of 10 can cover the entire dataset, so we can choose the depth accordingly, and the search space for tuning also gets limited. But in LightGBM, how can we rou...
Ankish Bansal
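A commonly cited rule of thumb (not an official LightGBM guarantee) is to keep num_leaves below 2^max_depth, since a depth-d tree can have at most 2^d leaves; a rough sketch:

import lightgbm as lgb

max_depth = 7
max_possible_leaves = 2 ** max_depth            # upper bound for a tree of this depth
num_leaves = min(127, max_possible_leaves - 1)  # in practice start well below the bound and tune

model = lgb.LGBMClassifier(max_depth=max_depth, num_leaves=num_leaves)

Searching num_leaves on a roughly logarithmic grid (e.g. 15, 31, 63, 127) together with min_data_in_leaf (min_child_samples in the sklearn API) keeps the search space small.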
1 vote · 0 answers · 15 views

How to use SMAC for hyper-parameter optimization of a Convolutional Neural Network?

Note: long post, please bear with me. I have implemented a convolutional neural network in PyTorch on the KMNIST dataset. I need to use SMAC to optimize the learning rate and the momentum of the CNN's Stochastic Gradient Descent. I am new to hyperparameter optimization, and what I learnt from the SMAC docu...
Riya208
1 vote · 1 answer · 909 views

Hyper-parameters of Gaussian Processes for Regression

I know a Gaussian Process Regression model is mainly specified by its covariance matrix, and the free hyper-parameters act as the 'weights' of the model. But could anyone explain what the two hyper-parameters (length-scale & amplitude) in the covariance matrix represent (since they are not 'real' par...
user3692015
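Roughly: the length-scale controls how quickly correlation decays with distance (how wiggly sampled functions are), and the amplitude (signal variance) controls their vertical spread. A small sketch, assuming an RBF kernel in sklearn, that makes this visible by sampling from the prior:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

X = np.linspace(0, 10, 200).reshape(-1, 1)

for length_scale in (0.5, 2.0):
    # ConstantKernel plays the role of the amplitude (signal variance) hyperparameter
    kernel = ConstantKernel(constant_value=4.0) * RBF(length_scale=length_scale)
    gp = GaussianProcessRegressor(kernel=kernel)
    samples = gp.sample_y(X, n_samples=3, random_state=0)
    # shorter length_scale -> faster-varying samples; larger constant_value -> wider |y| range
    print(length_scale, samples.std())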
1 vote · 1 answer · 625 views

How to tune machine learning hyperparameters using MOE?

I am trying to use MOE, the 'Metric Optimization Engine' created at Yelp, to tune hyperparameters for a machine learning algorithm. Their documentation is a bit limited and I'm having a hard time finding examples to follow. Say that I would like to find the optimal values for C, Gamma, and kernel ty...
Eric Xu
1 vote · 0 answers · 10 views

Hyperparameter optimization with grid search in Keras and flow_from_directory

I tried to optimize the hyperparameters of my Keras CNN made for image classification. I decided to use grid search from sklearn. I overcame the fundamental difficulty of making x and y out of Keras's flow_from_directory, but it still doesn't work. The last line raises ValueError: dropout is not a leg...
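That particular ValueError usually means the tuned parameter names do not match the arguments of the build function: with the keras.wrappers.scikit_learn.KerasClassifier wrapper, every key in param_grid must be an argument of build_fn (or a fit parameter). A sketch with made-up layer sizes and input shape:

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def create_model(dropout=0.2):                       # the tuned name must appear here
    model = Sequential()
    model.add(Dense(64, activation='relu', input_shape=(100,)))
    model.add(Dropout(dropout))
    model.add(Dense(10, activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model

clf = KerasClassifier(build_fn=create_model, epochs=3, batch_size=32, verbose=0)
grid = GridSearchCV(clf, param_grid={'dropout': [0.2, 0.5]}, cv=3)
# grid.fit(x, y)  # x, y must be in-memory arrays; materialise the flow_from_directory batches first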
1 vote · 1 answer · 1.1k views

Random search in r-caret

I recently came across the random search option in caret's trainControl() function. How does caret generate the parameters, and is there a way to provide some sort of user-specified input (e.g. a distribution the parameters are sampled from)? On the website I only found this quote: built-in mo...
winwin
1 vote · 1 answer · 383 views

Optimize hyperparameters for deep network

I am currently trying to come up with a novel structure for a CLDNN (Convolutional, LSTM, Deep Neural Network). Just as with any other network, I am having a difficult time optimizing the hyperparameters. I would like to try grid search and random search to get an optimal set of hyperparameters, but I am...
unknown_jy
1 vote · 1 answer · 598 views

How to tell RandomizedSearchCV to choose from a distribution or a None value?

Let's say we are trying to find the best max_depth parameter of a RandomForestClassifier. We are using RandomizedSearchCV: from scipy.stats import randint as sp_randint from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import RandomizedSearchCV rf_params = { # I...
delusionX
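A sketch of the simplest option: param_distributions values may also be plain lists, which RandomizedSearchCV samples uniformly, so None can sit alongside concrete depths:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

rf_params = {
    'max_depth': [None] + list(range(3, 31)),   # lists are sampled uniformly, so None gets 1/29 probability
    'n_estimators': [100, 300, 500],
}

X, y = make_classification(n_samples=300, random_state=0)
search = RandomizedSearchCV(RandomForestClassifier(random_state=0), rf_params,
                            n_iter=15, cv=3, random_state=0)
search.fit(X, y)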
1 vote · 1 answer · 87 views

Google Cloud ML: How can I enforce a pure grid search for a hyperparameter tuning job?

Google Cloud ML uses Bayesian optimisation to mitigate the curse of dimensionality. In specific situations, however, I have hyperparameter tuning jobs in which I want to enforce an exhaustive search over a grid of hyperparameters. How can I do this? My motivation for enforcing...
1 vote · 1 answer · 100 views

How to find and run the largest batch in a dataset before starting training

In TensorFlow, I frequently run into OOM errors during the first epoch of training. However, the large size of the network causes the first epoch to take around an hour, far too long to test new hyperparameters quickly. Ideally, I'd like to be able to sort the iterator so that I can just...
Evan Weissburg
1 vote · 1 answer · 153 views

Get hyperparameters during hyperparameter tuning on Cloud ML Engine

I run a hyperparameter tuning job on Cloud ML Engine. Only when a trial has concluded can I get the values of the hyperparameters in Job details and in Training output. I wonder if there is a way to get the values of the hyperparameters while the trial is running. Edit: I think it's a better idea to dump t...
Fariborz Ghavamian
1 vote · 1 answer · 402 views

Keras GridSearch with scikit-learn freezes

I am struggling to implement grid search in Keras using scikit-learn. Based on this tutorial, I wrote the following code: from keras.wrappers.scikit_learn import KerasClassifier from sklearn.model_selection import GridSearchCV def create_model(): model = Sequential() model.add(Dense(100, input_shape=(max...
Janina Nuber
1 vote · 1 answer · 440 views

Python: Alternative to hyperopt that can support multiprocessing?

Is there anything other than HyperOpt that can support multiprocessing for a hyperparameter search? I know that HyperOpt can be configured to use MongoDB, but it seems like it is easy to get it wrong and spend a week in the weeds. Is there anything more popular and effective?
user1367204
1 vote · 1 answer · 771 views

Hyperparameter Optimization in Tensorflow

I am doing hyperparameter optimization using Bayesian Optimization in Tensorflow for my Convolutional Neural Network (CNN). And I am getting this error: ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[4136,1,180,432] and type float on /job:localhost/replica:0/...
Chaine
1 vote · 1 answer · 60 views

Parallel cross-validation for groups of hyperparameters in Python

I need to run many cross-validations at once for specific groups of SVR hyperparameters: ((C_0, gamma_0), (C_1, gamma_1), ..., (C_n, gamma_n)), and thus I am looking for a parallelization method to speed this up. Maybe it would be possible to run GridSearchCV so that instead of checking every possible combination...
DexzMen
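A sketch of one way to get exactly those pairs (and nothing else) into GridSearchCV: param_grid may be a list of dicts, and each dict is expanded separately, so one-element lists per pair avoid the full cross product while n_jobs parallelises across pairs and folds. The pairs below are placeholders:

from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

pairs = [(1.0, 0.1), (10.0, 0.01), (100.0, 0.001)]                 # hypothetical (C, gamma) pairs
param_grid = [{'C': [C], 'gamma': [gamma]} for C, gamma in pairs]  # each dict expands to a single combination

X, y = make_regression(n_samples=300, n_features=10, random_state=0)
gs = GridSearchCV(SVR(), param_grid, cv=5, n_jobs=-1)              # pairs x folds run in parallel
gs.fit(X, y)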
1 vote · 1 answer · 652 views

Hyperparameter tuning in sklearn using RandomizedSearchCV taking a lot of time

I am dealing with a data set consisting of 13 features and 550068 rows. I did k-fold cross-validation with k = 10 and then selected the model with the lowest root mean squared error, in my case a gradient boosting regressor. Then I did hyperparameter tuning; here is my cod...
ratan rohith
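With roughly 550k rows the usual levers are fewer folds, a capped n_iter, full parallelism, and tuning on a subsample first; a sketch of that shape (the parameter ranges are placeholders):

from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV

param_distributions = {              # placeholder ranges
    'n_estimators': [100, 300],
    'max_depth': [3, 5, 7],
    'learning_rate': [0.03, 0.1],
}

search = RandomizedSearchCV(
    GradientBoostingRegressor(),
    param_distributions,
    n_iter=10,          # caps the number of sampled configurations
    cv=3,               # 3 folds instead of 10 cuts the number of fits
    n_jobs=-1,          # use all cores
    random_state=0,
)
# search.fit(X_sub, y_sub)  # e.g. a 50k-row subsample, to narrow the ranges cheaply before a final run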
1 vote · 1 answer · 0 views

How to determine optimal number of layers and activation function(s)

So I am working on the MNIST and Boston_Housing datasets using keras, and I was wondering how I would determine the optimal number of layers and activation functions for each layer. Now, I am not asking what the optimal number of layers/activation functions are, but rather the process I should go th...
H. Khan
1 vote · 1 answer · 0 views

Can't pickle _thread.RLock objects when running Tune from the Ray package for Python (hyperparameter tuning)

I am trying to do hyperparameter tuning with Ray's Tune package. Shown below is my code: # Disable linter warnings to maintain consistency with tutorial. # pylint: disable=invalid-name # pylint: disable=g-bad-import-order from __future__ import absolute_import from __future__ import division...
Suleka_28
1 vote · 1 answer · 0 views

How to define SearchAlgorithm-agnostic, high-dimensional search space in Ray Tune?

I have two questions concerning Ray Tune. First, how can I define a hyperparameter search space independently of the particular SearchAlgorithm used? For instance, HyperOpt uses something like 'height': hp.uniform('height', -100, 100) whereas BayesOpt uses something like 'width': (0, 20); is there...
Rylan Schaeffer
1 vote · 1 answer · 417 views

How to get the hyperparameters more easily?

Many models in machine learning include hyperparameters. What is the best practice for finding those hyperparameters using held-out data? Or what is your way of doing that?
gstar2002
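A minimal sketch of the holdout pattern with made-up data and a made-up grid: split off a validation set, score each candidate on it, then refit the winner on all the training data (keeping a further untouched test set if you also need an unbiased final estimate):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

best_score, best_params = -1.0, None
for max_depth in (5, 10, None):                       # hypothetical grid
    model = RandomForestClassifier(max_depth=max_depth, random_state=0).fit(X_train, y_train)
    score = accuracy_score(y_val, model.predict(X_val))
    if score > best_score:
        best_score, best_params = score, {'max_depth': max_depth}

final_model = RandomForestClassifier(random_state=0, **best_params).fit(X, y)  # refit on train + validation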
0 votes · 0 answers · 4 views

Hyperparameter tuning for tf.estimator.Estimator

Would someone mind showing me an example of how to employ hyperparameter tuning for a tf.estimator.Estimator? A brief overview of the code here: def myModel(features, labels, mode, params): ### --- here follows my model --- ### then Classifier = tf.estimator.Estimator(model_fn=myModel, **params) How...
Zack Joubert
1 vote · 1 answer · 440 views

How do I optimize the hyperparameters of LightFM?

I am using the LightFM recommender library on my dataset, which gives me the results in the image below. NUM_THREADS = 4 NUM_COMPONENTS = 30 NUM_EPOCHS = 5 ITEM_ALPHA = 1e-6 LEARNING_RATE = 0.005 LEARNING_SCHEDULE = 'adagrad' RANDOM_SEED = 29031994 warp_model = LightFM(loss='warp', learning_rate...
Tim Visser
