63,520,211
<p>Sequential consistency provides a <em>single total order</em> of all sequentially consistent operations. So if you have a sequentially consistent store in thread A, and a sequentially consistent load in thread B, and the store is ordered before the load (in said single total order), then B observes the value stored by A. So basically sequential consistency guarantees that the store is &quot;immediately visible&quot; to other threads. A release store does <em>not</em> provide this guarantee.</p> <p>As Peter Cordes correctly pointed out, the term &quot;immediately visible&quot; is rather imprecise. The &quot;visibility&quot; stems from the fact that all seq-cst operations are totally ordered, and all threads observe that order. Since the store and the load are totally ordered, the value of a store becomes visible before a subsequent load (in the single total order) is executed.</p> <p>There exists no such total order between acquire/release operations in different threads, so there is no visibility guarantee. The operations are only ordered once an acquire-operation observes the value from a release-operation, but there is no guarantee <em>when</em> the value of the release-operation becomes visible to the thread performing the acquire-operation.</p> <p>Let's consider what would happen if we were to use acquire/release in this example:</p> <pre><code>void write_x() { x.store(true, std::memory_order_release); }

void write_y() { y.store(true, std::memory_order_release); }

void read_x_then_y() {
  while (!x.load(std::memory_order_acquire));
  if (y.load(std::memory_order_acquire)) ++z;
}

void read_y_then_x() {
  while (!y.load(std::memory_order_acquire));
  if (x.load(std::memory_order_acquire)) ++z;
}

int main() {
  std::thread a(write_x);
  std::thread b(write_y);
  std::thread c(read_x_then_y);
  std::thread d(read_y_then_x);
  a.join(); b.join(); c.join(); d.join();
  assert(z.load() != 0); // can actually happen!!
}
</code></pre> <p>Since we have no guarantee about visibility, it could happen that thread <code>c</code> observes <code>x == true</code> and <code>y == false</code>, while at the same time thread <code>d</code> could observe <code>y == true</code> and <code>x == false</code>. So neither thread would increment <code>z</code> and the assertion would fire.</p> <p>For more details about the C++ memory model I can recommend this paper, which I co-authored: <a href="https://arxiv.org/abs/1803.04432" rel="nofollow noreferrer">Memory Models for C/C++ Programmers</a></p>
2020-08-21 09:30:12.460000+00:00
2020-08-21 10:48:18.903000+00:00
2020-08-21 10:48:18.903000+00:00
null
63,519,762
<p>I recently learned about the six C++ memory orders, and I am quite confused about <code>memory_order_acquire</code> and <code>memory_order_release</code>. Here is an example from cppreference:</p> <pre class="lang-cpp prettyprint-override"><code>#include &lt;thread&gt;
#include &lt;atomic&gt;
#include &lt;cassert&gt;

std::atomic&lt;bool&gt; x = {false};
std::atomic&lt;bool&gt; y = {false};
std::atomic&lt;int&gt; z = {0};

void write_x() { x.store(true, std::memory_order_seq_cst); }

void write_y() { y.store(true, std::memory_order_seq_cst); }

void read_x_then_y() {
  while (!x.load(std::memory_order_seq_cst));
  if (y.load(std::memory_order_seq_cst)) ++z;
}

void read_y_then_x() {
  while (!y.load(std::memory_order_seq_cst));
  if (x.load(std::memory_order_seq_cst)) ++z;
}

int main() {
  std::thread a(write_x);
  std::thread b(write_y);
  std::thread c(read_x_then_y);
  std::thread d(read_y_then_x);
  a.join(); b.join(); c.join(); d.join();
  assert(z.load() != 0); // will never happen
}
</code></pre> <p>The cppreference page says:</p> <blockquote> <p>This example demonstrates a situation where sequential ordering is necessary.</p> <p>Any other ordering may trigger the assert because it would be possible for the threads c and d to observe changes to the atomics x and y in opposite order.</p> </blockquote> <p>So my question is: why can <strong>memory_order_acquire</strong> and <strong>memory_order_release</strong> not be used here? And what semantics do memory_order_acquire and memory_order_release provide?</p> <p>Some references: <a href="https://en.cppreference.com/w/cpp/atomic/memory_order" rel="nofollow noreferrer">https://en.cppreference.com/w/cpp/atomic/memory_order</a> <a href="https://gcc.gnu.org/wiki/Atomic/GCCMM/AtomicSync" rel="nofollow noreferrer">https://gcc.gnu.org/wiki/Atomic/GCCMM/AtomicSync</a></p>
2020-08-21 09:02:48.680000+00:00
2020-08-28 20:34:48.270000+00:00
2020-08-28 20:34:48.270000+00:00
c++|memory-barriers|memory-model|stdatomic
['https://arxiv.org/abs/1803.04432']
1
14,926,917
<p>The problem that you describe is known as <em>distributed aggregation</em>. There are a number of solutions appropriate for different assumptions on the network (what nodes are connected? can messages be lost?), the function to compute (average? sum?), and so on. A good overview, with references to algorithms that you can use, can be found at <a href="http://arxiv.org/abs/1110.0725" rel="nofollow">http://arxiv.org/abs/1110.0725</a>.</p>
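<p>As a rough illustration of the simplest family of such algorithms (a sketch of my own, not taken from the survey above): in gossip-style averaging, every node repeatedly replaces its value with the average over its neighbourhood, and on a connected network all values converge to the global mean.</p> <pre><code># Minimal sketch of synchronous gossip averaging (illustrative only).
import numpy as np

def gossip_round(values, neighbours):
    """One round: each node moves to the mean of itself and its neighbours."""
    new_values = dict(values)
    for node, adj in neighbours.items():
        new_values[node] = np.mean([values[node]] + [values[j] for j in adj])
    return new_values

values = {'A': 30.0, 'B': 65.0, 'C': 35.0, 'D': 70.0}
neighbours = {'A': ['B', 'D'], 'B': ['A', 'C'],
              'C': ['B', 'D'], 'D': ['A', 'C']}
for _ in range(50):
    values = gossip_round(values, neighbours)
print(values)  # every value is now close to the global mean, 50.0
</code></pre>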
2013-02-17 22:38:10.983000+00:00
2013-02-17 22:38:10.983000+00:00
null
null
14,914,637
<p>I have to write a distributed system with four processes running on four different nodes. The distributed system is supposed to work in the following way: a random number generator generates a random number at each process. The objective is to even out these values in all processes by message passing between processes. Process A is the server: it gets the numbers from all processes and then orders them to send a portion of their number to one or more other processes in order to even out the numbers all processes hold. For example, A's count is 30, B's count is 65, C's count is 35 and D's count is 70. A computes 30+65+35+70 = 200, divided by 4 = 50. Now process A, the server, knows who has less than average and who has more than average. The question is: how does A decide who sends what number to whom, to even out the values of all processes? Please note that A can't directly instruct a process to decrement or increment its count, e.g. it can't send a message to B telling it to decrement by 15 and another message to C telling it to increment by 15. Instead, A must send a message to B telling it to send 15 of its count to C. Thanks in advance. Zaki.</p>
2013-02-16 20:10:41.460000+00:00
2013-02-17 22:38:10.983000+00:00
2013-02-16 20:34:47.877000+00:00
algorithm|language-agnostic|distributed
['http://arxiv.org/abs/1110.0725']
1
41,337,433
<p>In general, you should look for methods that offer <strong>incremental</strong> or <strong>online</strong> training. With these, you don't have to present the complete data set to the algorithm at once; instead you feed it new data as it becomes available. That's essential if the data grows on a daily basis and your computational resources are limited. <a href="https://en.wikipedia.org/wiki/Stochastic_gradient_descent" rel="nofollow noreferrer">Stochastic gradient descent</a> is a pretty popular optimisation method that meets your requirements.</p> <p>You could use a variation of random forest called <a href="https://arxiv.org/abs/1406.2673" rel="nofollow noreferrer">Mondrian Forest</a>. To quote from the abstract of the linked paper: <em>Mondrian forests achieve competitive predictive performance comparable with existing online random forests and periodically re-trained batch random forests, while being more than an order of magnitude faster, thus representing a better computation vs accuracy tradeoff</em>. The code can be found on <a href="https://github.com/balajiln/mondrianforest" rel="nofollow noreferrer">GitHub</a>.</p> <p>Without knowing your data and the nature of your problem, it's impossible to offer specific guidance on what would perform better than a random forest. If you would like to stick with scikit-learn, check the article <a href="https://scikit-learn.org/stable/computing/scaling_strategies.html" rel="nofollow noreferrer">Strategies to scale computationally: bigger data</a>.</p>
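<p>A minimal sketch of what incremental training looks like in scikit-learn (the data here is random and purely illustrative; the loss name is <code>'log'</code> in older scikit-learn releases):</p> <pre><code>import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss='log_loss')   # logistic regression fitted with SGD
classes = np.array([0, 1])             # all labels must be declared up front

for day in range(10):                  # pretend each day brings a fresh batch
    X_batch = np.random.randn(1000, 20)
    y_batch = np.random.randint(0, 2, size=1000)
    clf.partial_fit(X_batch, y_batch, classes=classes)  # updates, never refits
</code></pre>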
2016-12-27 00:00:16.337000+00:00
2022-08-23 21:20:08.613000+00:00
2022-08-23 21:20:08.613000+00:00
null
41,327,813
<p>I have a dataset that grows on a daily basis, and I am concerned that it will soon reach a size that memory cannot accommodate. I am using random forest classifiers and regressors in my application. I have heard of partial fitting, but I don't know if random forests can be trained in that manner. How do I ensure that the application doesn't break and continues to perform well even if the data set grows beyond memory size? Also, would the scenario be any different if an SVM were used instead of a random forest?</p>
2016-12-26 07:23:30.747000+00:00
2022-08-23 21:20:08.613000+00:00
null
python|machine-learning|scikit-learn|training-data
['https://en.wikipedia.org/wiki/Stochastic_gradient_descent', 'https://arxiv.org/abs/1406.2673', 'https://github.com/balajiln/mondrianforest', 'https://scikit-learn.org/stable/computing/scaling_strategies.html']
4
59,504,442
<p>First <a href="https://en.wikipedia.org/wiki/Job_shop_scheduling" rel="nofollow noreferrer">read this page (Job Shop Scheduling)</a></p> <p>The problem is <strong>shortest path</strong>. For a reasonable approximation of optimal, forget SAT expressions. Try what is obvious: if you run the shortest job on M1 first, then that job is ready to use M2 while the next shortest job is using M1.<br>What everyone ignores in these problems is that there are 'phantom machines' consuming time, namely the wait states. Maximum productivity is the equivalent of minimum time in wait states. So every job can be represented as a binary string representing time in a task that is productive or non-productive. Every set of strings of length n can be represented by an n-SAT expression. That expression can be reduced to a k-SAT expression, where 2 &lt; k &lt; n, in polynomial time.<br>The rest is a 'coding' problem, as in how to 'code' the binary strings so that solving the SAT expression produces what you are seeking.</p> <p>See <a href="https://arxiv.org/abs/cs/0205064" rel="nofollow noreferrer">this (Three complete deterministic polynomial algorithms for 3SAT)</a> to solve the SAT expression.</p>
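<p>For the two-machine case in the question, the shortest-job-first intuition above is made exact by Johnson's rule (my addition, not part of this SAT-based approach). A minimal sketch, reading the question's instance as one row of processing times per machine (an assumption about the file format):</p> <pre><code># Johnson's rule for the 2-machine flow shop: jobs whose M1 time is not greater
# than their M2 time go first (ascending by M1), the rest go last (descending by M2).
def johnson_two_machine(jobs):
    """jobs: list of (name, time_on_m1, time_on_m2); returns a makespan-minimal order."""
    front = sorted((j for j in jobs if j[1] &lt;= j[2]), key=lambda j: j[1])
    back = sorted((j for j in jobs if j[1] &gt; j[2]), key=lambda j: -j[2])
    return front + back

jobs = [('J1', 4, 63), ('J2', 26, 83), ('J3', 65, 57), ('J4', 62, 9)]
print([name for name, _, _ in johnson_two_machine(jobs)])  # ['J1', 'J2', 'J3', 'J4']
</code></pre>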
2019-12-27 18:02:21.023000+00:00
2019-12-27 23:55:54.287000+00:00
2019-12-27 23:55:54.287000+00:00
null
29,366,185
<p>I contact you in order to get an idea of &quot;how to transform a flow shop scheduling problem&quot; into Boolean satisfiability.</p> <p>I have already done such a reduction for an N*N Sudoku, the N-queens problem and a class scheduling problem, but I have some issues with how to transform the flow shop into SAT.</p> <p>A SAT problem looks like this:</p> <p><img src="https://i.stack.imgur.com/SzHSS.png" alt="Illustration of a SAT problem" /></p> <p>The goal is: given different Boolean variables, to find an assignment of every variable that makes the &quot;sentence&quot; true (if finding a solution is possible).</p> <p>I created my own solver with a genetic algorithm, able to find a solution and to prove when there is none. Now I am trying it on different NP problems, like flow shop.</p> <blockquote> <p>Flow shop scheduling problems are a class of scheduling problems with a workshop or group shop in which the flow control shall enable an appropriate sequencing for each job and for processing on a set of machines or with other resources 1,2,...,m in compliance with given processing orders.</p> <p>Especially the maintaining of a continuous flow of processing tasks is desired with a minimum of idle time and a minimum of waiting time.</p> <p>Flow shop scheduling is a special case of job shop scheduling where there is strict order of all operations to be performed on all jobs.</p> <p>Flow shop scheduling may apply as well to production facilities as to computing designs.</p> <p>(<a href="http://en.wikipedia.org/wiki/Flow_shop_scheduling" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Flow_shop_scheduling</a>)</p> </blockquote> <p>The result is a sequence of jobs that will go through every workshop, and the graphical result will look like this:</p> <p><img src="https://i.stack.imgur.com/lQs2v.gif" alt="Graphical result of a Flow Shop" /></p> <p>To represent flow-shop instances, I have input files like this:</p> <pre><code>2 4
4 26 65 62
63 83 57 9
</code></pre> <p>This file means that I have 2 machines (shops) and 4 jobs, with the processing time of each job on each machine.</p> <p>The goal: to find the sequence that minimizes C_max, i.e. the completion time of the last job on the last machine.</p> <p>My flow shops are really simple for now, but I have no idea how to formalize them in order to create a CNF file and then run my SAT solver on it.</p> <p>If one of you has some idea: an article? the beginning of an idea?</p> <p>Next part of this question: <a href="https://stackoverflow.com/questions/29651856/flow-job-shop-to-boolean-satisfiability-polynomial-time-reduction-part-2">Flow/Job Shop to Boolean satisfiability [Polynomial-time reduction] part 2</a></p>
2015-03-31 10:16:50.140000+00:00
2019-12-27 23:55:54.287000+00:00
2020-06-20 09:12:55.060000+00:00
algorithm|optimization|reduction|sat
['https://en.wikipedia.org/wiki/Job_shop_scheduling', 'https://arxiv.org/abs/cs/0205064']
2
61,737,860
<p>The convolutional neural network you are trying to implement is a great baseline in the NLP domain. It was introduced for the first time in this <a href="https://arxiv.org/pdf/1408.5882.pdf" rel="nofollow noreferrer">paper</a> (Kim, 2014).</p> <p>I found the code you report very useful, but it may be more complex than we need. I have rewritten the network in plain Keras (only the regularization is missing):</p> <pre><code>from keras.layers import (Input, Embedding, Conv1D, MaxPooling1D,
                          Concatenate, Flatten, Dropout, Dense)
from keras.models import Model

def TextCNN(sequence_length, num_classes, vocab_size,
            embedding_size, filter_sizes, num_filters, embedding_matrix):

    sequence_input = Input(shape=(sequence_length,), dtype='int32')
    embedding_layer = Embedding(vocab_size,
                                embedding_size,
                                weights=[embedding_matrix],
                                input_length=sequence_length,
                                trainable=False)
    embedded_sequences = embedding_layer(sequence_input)

    convs = []
    for fsz in filter_sizes:
        x = Conv1D(num_filters, fsz, activation='relu', padding='same')(embedded_sequences)
        x = MaxPooling1D(pool_size=2)(x)
        convs.append(x)

    x = Concatenate(axis=-1)(convs)
    x = Flatten()(x)
    x = Dropout(0.5)(x)
    output = Dense(num_classes, activation='softmax')(x)

    model = Model(sequence_input, output)
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
</code></pre> <p>The initial embedding is set with the weights learned with GloVe. You can load them, or learn new embedding representations with other techniques (Word2Vec or FastText) and load those instead. The fit is computed as usual.</p> <p>Note that the above is the original architecture of the network. If you would like to insert a 100-unit dense layer before the output, it can simply be modified in this way (here a <a href="https://github.com/diegoschapira/CNN-Text-Classifier-using-Keras/blob/master/models.py#L92-L122" rel="nofollow noreferrer">code reference</a>):</p> <pre><code>def TextCNN(sequence_length, num_classes, vocab_size,
            embedding_size, filter_sizes, num_filters, embedding_matrix):

    sequence_input = Input(shape=(sequence_length,), dtype='int32')
    embedding_layer = Embedding(vocab_size,
                                embedding_size,
                                weights=[embedding_matrix],
                                input_length=sequence_length,
                                trainable=False)
    embedded_sequences = embedding_layer(sequence_input)

    convs = []
    for fsz in filter_sizes:
        x = Conv1D(num_filters, fsz, activation='relu', padding='same')(embedded_sequences)
        x = MaxPooling1D(pool_size=2)(x)
        convs.append(x)

    x = Concatenate(axis=-1)(convs)
    x = Flatten()(x)
    x = Dense(100, activation='relu', name='extractor')(x)
    x = Dropout(0.5)(x)
    output = Dense(num_classes, activation='softmax')(x)

    model = Model(sequence_input, output)
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = TextCNN(sequence_length=50, num_classes=10, vocab_size=3333,
                embedding_size=100, filter_sizes=[3, 4, 5], num_filters=50,
                embedding_matrix=embedding_matrix)
model.fit(....)
</code></pre> <p>To extract the features of interest, we need the output of our 100-unit dense layer (which we named 'extractor'). I also suggest <a href="https://machinelearningmastery.com/how-to-visualize-filters-and-feature-maps-in-convolutional-neural-networks/" rel="nofollow noreferrer">this tutorial</a> on filter and feature extraction.</p> <pre><code>extractor = Model(model.input, model.get_layer('extractor').output)
representation = extractor.predict(np.random.randint(0, 200, (1000, 50)))
</code></pre> <p>The <code>representation</code> will be an array of shape (n_samples, 100).</p>
2020-05-11 19:35:50.833000+00:00
2020-05-11 19:35:50.833000+00:00
null
null
61,688,104
<p>I am trying to implement the preprocessing code for this <a href="https://arxiv.org/pdf/1908.11540.pdf" rel="nofollow noreferrer">paper</a> (code in this <a href="https://github.com/SenticNet/conv-emotion" rel="nofollow noreferrer">repo</a>). The preprocessing code is described in the paper here:</p> <p>"A convolutional neural network (Kim, 2014) is used to extract textual features from the transcript of the utterances. We use a single convolutional layer followed by max-pooling and a fully connected layer to obtain the feature representations for the utterances. The input to this network is the 300 dimensional pretrained 840B GloVe vectors (Pennington et al., 2014). We use filters of size 3, 4 and 5 with 50 feature maps in each. The convoluted features are then max-pooled with a window size of 2 followed by the ReLU activation (Nair and Hinton, 2010). <strong>These are then concatenated and fed to a 100 dimensional fully connected layer, whose activations form the representation of the utterance.</strong> This network is trained at utterance level with the emotion labels."</p> <p>The authors of the paper state that the CNN feature extraction code can be found in this <a href="https://github.com/dennybritz/cnn-text-classification-tf" rel="nofollow noreferrer">repo</a>. However, this code is for a complete model that does sequence classification. It does everything in the quote above except the bolded part (and it goes further and does classification). I want to edit the code so that it concatenates the pooled features, feeds them into the 100d layer, and then extracts the activations. The data to train on is found in the repo (it's the IMDB dataset).</p> <p>The output should be a (100, ) tensor for each sequence.</p> <p>Here's the code for the CNN model:</p> <pre><code>import tensorflow as tf
import numpy as np


class TextCNN(object):
    """
    A CNN for text classification.
    Uses an embedding layer, followed by a convolutional, max-pooling and softmax layer.
    """
    def __init__(
        self, sequence_length, num_classes, vocab_size,
        embedding_size, filter_sizes, num_filters, l2_reg_lambda=0.0):

        # Placeholders for input, output and dropout
        self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x")
        self.input_y = tf.placeholder(tf.float32, [None, num_classes], name="input_y")
        self.dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob")

        # Keeping track of l2 regularization loss (optional)
        l2_loss = tf.constant(0.0)

        # Embedding layer
        with tf.device('/cpu:0'), tf.name_scope("embedding"):
            self.W = tf.Variable(
                tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0),
                name="W")
            self.embedded_chars = tf.nn.embedding_lookup(self.W, self.input_x)
            self.embedded_chars_expanded = tf.expand_dims(self.embedded_chars, -1)

        # Create a convolution + maxpool layer for each filter size
        pooled_outputs = []
        for i, filter_size in enumerate(filter_sizes):
            with tf.name_scope("conv-maxpool-%s" % filter_size):
                # Convolution Layer
                filter_shape = [filter_size, embedding_size, 1, num_filters]
                W = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.1), name="W")
                b = tf.Variable(tf.constant(0.1, shape=[num_filters]), name="b")
                conv = tf.nn.conv2d(
                    self.embedded_chars_expanded,
                    W,
                    strides=[1, 1, 1, 1],
                    padding="VALID",
                    name="conv")
                # Apply nonlinearity
                h = tf.nn.relu(tf.nn.bias_add(conv, b), name="relu")
                # Maxpooling over the outputs
                pooled = tf.nn.max_pool(
                    h,
                    ksize=[1, sequence_length - filter_size + 1, 1, 1],
                    strides=[1, 1, 1, 1],
                    padding='VALID',
                    name="pool")
                pooled_outputs.append(pooled)

        # Combine all the pooled features
        num_filters_total = num_filters * len(filter_sizes)
        self.h_pool = tf.concat(pooled_outputs, 3)
        self.h_pool_flat = tf.reshape(self.h_pool, [-1, num_filters_total])

        # Add dropout
        with tf.name_scope("dropout"):
            self.h_drop = tf.nn.dropout(self.h_pool_flat, self.dropout_keep_prob)

        # Final (unnormalized) scores and predictions
        with tf.name_scope("output"):
            W = tf.get_variable(
                "W",
                shape=[num_filters_total, num_classes],
                initializer=tf.contrib.layers.xavier_initializer())
            b = tf.Variable(tf.constant(0.1, shape=[num_classes]), name="b")
            l2_loss += tf.nn.l2_loss(W)
            l2_loss += tf.nn.l2_loss(b)
            self.scores = tf.nn.xw_plus_b(self.h_drop, W, b, name="scores")
            self.predictions = tf.argmax(self.scores, 1, name="predictions")

        # Calculate mean cross-entropy loss
        with tf.name_scope("loss"):
            losses = tf.nn.softmax_cross_entropy_with_logits(logits=self.scores, labels=self.input_y)
            self.loss = tf.reduce_mean(losses) + l2_reg_lambda * l2_loss

        # Accuracy
        with tf.name_scope("accuracy"):
            correct_predictions = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
            self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, "float"), name="accuracy")
</code></pre> <p>I want to do the concatenation into the 100d layer to get the activations; I think it goes around line 59 (right before the <code># Add dropout</code> section near the bottom), and then the rest below it can be commented out. How do I do this?</p>
2020-05-08 21:07:36.703000+00:00
2021-06-28 08:55:20.967000+00:00
2021-06-28 08:55:20.967000+00:00
python|tensorflow|keras|nlp|conv-neural-network
['https://arxiv.org/pdf/1408.5882.pdf', 'https://github.com/diegoschapira/CNN-Text-Classifier-using-Keras/blob/master/models.py#L92-L122', 'https://machinelearningmastery.com/how-to-visualize-filters-and-feature-maps-in-convolutional-neural-networks/']
3
39,528,832
<p>So your filters in the first layer are of size w x h x 3 (for VGG, w=h=3). I'd suggest taking the mean over the last dimension and using them as w x h x 1 filters. As you've probably figured out, you won't need to change any other weights.</p> <p>This is inspired by a similar approach people have applied to initializing the VGG ImageNet model on flow images.</p> <p><a href="https://arxiv.org/pdf/1507.02159v1.pdf" rel="nofollow">https://arxiv.org/pdf/1507.02159v1.pdf</a> (Section 2.2)</p>
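<p>A minimal sketch of that averaging with Keras (assuming <code>keras.applications</code> is available; the layer index is specific to VGG16, where <code>layers[1]</code> is <code>block1_conv1</code>):</p> <pre><code>from keras.applications import VGG16

model = VGG16(weights='imagenet', include_top=False)
w, b = model.layers[1].get_weights()      # kernel shape (3, 3, 3, 64), bias (64,)
w_gray = w.mean(axis=2, keepdims=True)    # mean over the RGB axis: shape (3, 3, 1, 64)
# Build a copy of the network whose first conv takes 1-channel input,
# then initialize that conv with: new_conv.set_weights([w_gray, b])
</code></pre>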
2016-09-16 10:07:12.043000+00:00
2016-09-16 11:13:10.160000+00:00
2016-09-16 11:13:10.160000+00:00
null
36,787,800
<p>The VGG model accepts a 3-channel RGB image as input, but my data are single-channel grayscale images. Any suggestions for how to utilize the weights in the first conv layer of the VGG model?</p>
2016-04-22 07:39:02.697000+00:00
2018-08-15 06:31:30.430000+00:00
2018-08-15 06:31:30.430000+00:00
deep-learning|convolution|vgg-net
['https://arxiv.org/pdf/1507.02159v1.pdf']
1
46,190,654
<p>To my understanding, the input size only affects the input layer of your network. But please correct me if that is wrong; I'm still quite new to the whole deep learning paradigm.</p> <p>I have used three models of the Tensorflow Object Detection API: Faster R-CNN and R-FCN, both with a Resnet101 feature extractor, and an SSD model with Inception V2. The SSD model reshapes the images to a fixed <code>M x M</code> size. This is also mentioned in the paper &quot;Speed/accuracy trade-offs for modern convolutional object detectors&quot; by Huang et al., whereas the Faster R-CNN and R-FCN models are trained on images scaled to M pixels on the shorter edge. This resizing is located in the preprocessing stage of the model.</p> <p>Another method is to keep the aspect ratio and crop a fixed size out of the image; one can then crop from different positions (center, top-left, top-right, bottom-left, bottom-right etc.) to make the model robust. More sophisticated ways include resizing the image to several scales before cropping, or using different aspect ratios in the convolutional layers with an adaptive pooling size later to obtain the same feature dimension, like SPP (see <a href="https://arxiv.org/pdf/1406.4729.pdf" rel="nofollow noreferrer">Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition</a> by He et al. for more detail). This is what is done by the <code>keep_aspect_ratio_resizer</code> in the config proto.</p> <p>To my understanding, this makes the architectures resilient to different image sizes, so the internal weights of the hidden layers are not affected by the input size of the image.</p>
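<p>A minimal sketch of the arithmetic behind such a keep-aspect-ratio resizer (the 600/1024 bounds are an assumption, matching the defaults commonly seen in Faster R-CNN configs):</p> <pre><code>def keep_aspect_ratio_size(w, h, min_dim=600, max_dim=1024):
    """New (width, height): shorter side scaled towards min_dim,
    capped so the longer side does not exceed max_dim."""
    scale = min_dim / min(w, h)
    scale = min(scale, max_dim / max(w, h))
    return int(round(w * scale)), int(round(h * scale))

print(keep_aspect_ratio_size(1920, 1080))  # (1024, 576)
</code></pre>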
2017-09-13 06:48:00.757000+00:00
2017-09-13 06:48:00.757000+00:00
null
null
46,164,160
<p>I see that the TensorFlow Object Detection API allows one to customise the image sizes which are fed in. My question is how this works with pretrained weights, which are usually trained on 224*224 images, or sometimes 300*300 images.</p> <p>In other frameworks I have used, such as Caffe R-FCN, YOLO and Keras SSD, the images are downscaled to fit the standard size coming with the pretrained weights.</p> <p>Are the pretrained weights used by TF of the 300*300 input size? And if so, how can we use these weights to classify customised image sizes? Does TF downsize to the respective weights size?</p>
2017-09-11 20:48:42.327000+00:00
2017-09-13 06:48:00.757000+00:00
null
tensorflow|object-detection|imagenet
['https://arxiv.org/pdf/1406.4729.pdf']
1
54,777,866
<p>It sounds like overfitting, which isn't surprising since this model is basically a linear regression model.<br> There are a few options you can try:<br> 1. Add hidden layers + activation functions (<a href="https://arxiv.org/abs/1511.07289" rel="nofollow noreferrer">https://arxiv.org/abs/1511.07289</a>: the ELU paper works on the MNIST data set with a vanilla DNN).<br> 2. Use either a CNN or an RNN, although a CNN is more apt for image problems.<br> 3. Use a better optimizer. If you are new, try the ADAM optimizer (<a href="https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer</a>), and then move on to momentum with Nesterov (<a href="https://www.tensorflow.org/api_docs/python/tf/train/MomentumOptimizer" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/train/MomentumOptimizer</a>)</p> <p>Without feature engineering, it'll be hard to pull off image classification using just linear regression. Also, you do not need to run softmax on your outcomes since softmax is designed to smooth argmax. Lastly, you should give your placeholders the shape (None, num_features) instead, to allow a variable batch size. This will let you feed your valid and test datasets directly into feed_dict without having to create additional tensors.</p>
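<p>A minimal sketch of option 1 in the question's TF1 style (the layer size and variable names are illustrative, not prescriptive):</p> <pre><code>num_hidden = 256
W1 = tf.Variable(tf.truncated_normal([num_features, num_hidden], stddev=0.1))
b1 = tf.Variable(tf.zeros([num_hidden]))
hidden = tf.nn.elu(tf.matmul(tf_train_data, W1) + b1)   # ELU nonlinearity

W2 = tf.Variable(tf.truncated_normal([num_hidden, num_labels], stddev=0.1))
b2 = tf.Variable(tf.zeros([num_labels]))
score_vector = tf.matmul(hidden, W2) + b2               # logits

cost_func = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(
    labels=tf_train_labels, logits=score_vector))
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost_func)
</code></pre>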
2019-02-20 02:17:16.013000+00:00
2019-02-20 02:17:16.013000+00:00
null
null
54,772,758
<p>I'm currently learning how to use TensorFlow and I'm having some issues implementing this softmax regression application.</p> <p>There's no error when compiling, but for some reason the validation and test predictions show no improvement; only the train prediction improves.</p> <p>I'm using Stochastic Gradient Descent (SGD) with minibatches in order to converge faster, but I don't know if this could be causing trouble somehow.</p> <p>I'll be thankful if you could share some ideas. Here's the full code:</p> <pre><code>import input_data
import numpy as np
import random as ran
import tensorflow as tf
import matplotlib.pyplot as plt

mnist = input_data.read_data_sets('MNIST_Data/', one_hot=True)

#Features &amp; Data
num_features = 784
num_labels = 10
learning_rate = 0.05
batch_size = 128
num_steps = 5001

train_dataset = mnist.train.images
train_labels = mnist.train.labels
test_dataset = mnist.test.images
test_labels = mnist.test.labels
valid_dataset = mnist.validation.images
valid_labels = mnist.validation.labels

graph = tf.Graph()
with graph.as_default():

    tf_train_data = tf.placeholder(tf.float32, shape=(batch_size, num_features))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_data = tf.constant(valid_dataset)
    tf_test_data = tf.constant(test_dataset)

    W = tf.Variable(tf.truncated_normal([num_features, num_labels]))
    b = tf.Variable(tf.zeros([num_labels]))

    score_vector = tf.matmul(tf_train_data, W) + b
    cost_func = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(
        labels=tf_train_labels, logits=score_vector))
    score_valid = tf.matmul(tf_test_data, W) + b
    score_test = tf.matmul(tf_valid_data, W) + b

    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_func)

    train_pred = tf.nn.softmax(score_vector)
    valid_pred = tf.nn.softmax(score_valid)
    test_pred = tf.nn.softmax(score_test)

def accuracy(predictions, labels):
    correct_pred = np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
    accu = (100.0 * correct_pred) / predictions.shape[0]
    return accu

with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    print("Initialized")

    for step in range(num_steps):
        offset = np.random.randint(0, train_labels.shape[0] - batch_size - 1)
        batch_data = train_dataset[offset:(offset+batch_size), :]
        batch_labels = train_labels[offset:(offset+batch_size), :]

        feed_dict = {tf_train_data : batch_data,
                     tf_train_labels : batch_labels
                     }
        _, l, predictions = sess.run([optimizer, cost_func, train_pred],
                                     feed_dict=feed_dict)
        if (step % 500 == 0):
            print("Minibatch loss at step {0}: {1}".format(step, l))
            print("Minibatch accuracy: {:.1f}%".format(
                accuracy(predictions, batch_labels)))
            print("Validation accuracy: {:.1f}%".format(
                accuracy(valid_pred.eval(), valid_labels)))

    print("\nTest accuracy: {:.1f}%".format(
        accuracy(test_pred.eval(), test_labels)))
</code></pre>
2019-02-19 18:29:15.690000+00:00
2019-02-20 02:17:16.013000+00:00
2019-02-19 20:35:18.123000+00:00
python|tensorflow
['https://arxiv.org/abs/1511.07289', 'https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer', 'https://www.tensorflow.org/api_docs/python/tf/train/MomentumOptimizer']
3
5,317,345
<p>There is a very nice article by Bachmat, Berend, Sapir, Skiena and Stolyarov entitled <a href="http://arxiv.org/abs/physics/0512020" rel="nofollow">Analysis of airplane boarding via space-time geometry and random matrix theory</a> that models this exact problem for airplane boarding. From their abstract:</p> <blockquote> <p>We show that airplane boarding can be asymptotically modeled by 2-dimensional Lorentzian geometry. Boarding time is given by the maximal proper time among curves in the model. Discrepancies between the model and simulation results are closely related to random matrix theory. We then show how such models can be used to explain why some commonly practiced airline boarding policies are ineffective and even detrimental.</p> </blockquote> <p>The conclusions of the paper are:</p> <ul> <li>BEST: Window-Middle-Aisle</li> <li>NEAR OPTIMAL: Random Boarding </li> <li>REALLY BAD: Back-to-Front</li> </ul> <p>For your set-up, I think this means you should ignore <strong>how far down the aisle</strong> the people are and instead focus on <strong>how far from the aisle</strong> they are. This model also accounts for time to store luggage, so you may need to adjust that somewhat for your situation. In any event, I think this confirms what you are finding through your model.</p>
2011-03-15 20:04:40.820000+00:00
2011-03-15 20:04:40.820000+00:00
null
null
5,317,135
<p>I'm trying to find the best algorithm for the following sorting problem.</p> <p>There are <strong>N = K × M</strong> seats in an auditorium with one aisle, <strong>K</strong> rows, and <strong>M</strong> seats per row. The assumption is made that <strong>K</strong> is bigger than <strong>M</strong>, but I don't think that's very important. There are <strong>N</strong> people that are in bijection with the seats (assigned seats). Assuming that people don't like waiting, what's the fastest way to line them up to get them all in their seats as quickly as possible?</p> <p>I ran some simple experiments (using random permutations) and it seemed that letting them line up randomly is faster than having the people in the front third (further down the aisle) line up first, then the middle third, then the back third. That seems wrong to me.</p> <p>I'm writing this in MatLab if that matters at all. Any ideas or answers?</p>
2011-03-15 19:46:48.563000+00:00
2011-06-21 08:14:40.687000+00:00
2011-06-21 08:14:40.687000+00:00
algorithm|sorting|matlab|permutation
['http://arxiv.org/abs/physics/0512020']
1
63,484,988
<p>I ran into a similar issue. When I tried to read the paper <a href="https://arxiv.org/pdf/1206.6910.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1206.6910.pdf</a>, I noticed this paragraph:</p> <blockquote> <p>Also, simulations and theory (Golyandina, 2010) show that it is better to choose window length L smaller than half of the time series length N. One of the recommended values is N/3.</p> </blockquote> <p>Maybe that's why, in the ML.NET power anomaly example, the value is chosen to be 30 for the 90-point dataset.</p>
2020-08-19 10:27:52.243000+00:00
2020-08-19 10:27:52.243000+00:00
null
null
61,951,161
<p>In the creation of a SsaSpikeEstimator instance by the <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.ml.timeseriescatalog.detectspikebyssa?view=ml-dotnet#Microsoft_ML_TimeSeriesCatalog_DetectSpikeBySsa_Microsoft_ML_TransformsCatalog_System_String_System_String_System_Int32_System_Int32_System_Int32_System_Int32_Microsoft_ML_Transforms_TimeSeries_AnomalySide_Microsoft_ML_Transforms_TimeSeries_ErrorFunction_" rel="nofollow noreferrer">DetectSpikeBySsa method</a>, there is a parameter called <code>pvalueHistoryLength</code> - could anybody please help me understand, for any given time series with X points, which is the optimal value for this parameter?</p>
2020-05-22 08:49:01.597000+00:00
2020-08-19 10:27:52.243000+00:00
null
ml.net
['https://arxiv.org/pdf/1206.6910.pdf']
1
25,853,210
<p>Actually this problem has been studied under the name <strong>canonical coin systems</strong>, and there is even a paper on how to determine whether a given coin system supports a greedy solution. The original paper may give you some insights: <a href="http://arxiv.org/pdf/0809.0400.pdf" rel="nofollow">Canonical coin systems for change-making problems</a>.</p> <p>Alternatively, you can google the key phrase &quot;canonical coin systems&quot; for more information.</p>
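<p>A minimal sketch contrasting a canonical system (US coins) with a non-canonical one where greedy fails (my illustration, not from the paper):</p> <pre><code>def greedy_change(n, denominations):
    """Greedy change-making; only optimal when the coin system is canonical."""
    coins = {}
    for d in sorted(denominations, reverse=True):
        coins[d], n = divmod(n, d)
    return coins

print(greedy_change(68, [1, 5, 10, 25]))  # {25: 2, 10: 1, 5: 1, 1: 3} -- optimal
print(greedy_change(6, [1, 3, 4]))        # {4: 1, 3: 0, 1: 2}: 3 coins, but 3+3 uses only 2
</code></pre>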
2014-09-15 17:07:05.293000+00:00
2014-09-15 18:33:50.150000+00:00
2014-09-15 18:33:50.150000+00:00
null
25,852,951
<p>This question occurred as part of an increasingly-difficult problem in an interview. It started ever so simply:</p> <blockquote> <p>(1) Assuming an infinite supply of coins (in the usual 1, 5, 10, 25 cent denominations). Given <em>n</em> cents, is there always a way to make change for it using the normal denominations?</p> </blockquote> <p>Yes, since the penny divides all possible values of <em>n</em> cents.</p> <blockquote> <p>(2) Good, now write a program that accepts <em>n</em> (positive) cents, and returns one possible way of making change for it</p> </blockquote> <p>Return <em>n</em> pennies.</p> <blockquote> <p>(3) Smart ass. What if you want to minimize the number of coins required to make the change?</p> </blockquote> <p>Start with the largest denomination <em>d_i</em>, and take the maximum number of them such that you don't exceed <em>n</em>, <em>m_i</em>. Take <em>n - (d_i)(m_i)</em> and repeat for next largest denomination.</p> <blockquote> <p>(4) Good, can you prove this solution is optimal?</p> </blockquote> <p>Yes, { blah, blah }</p> <blockquote> <p>(5) Ok, *smirk* , now what if, in addition to the <em>n</em> cents, you were given an arbitrary-sized array consisting of arbitrary denominations? You can assume each denomination occurs only once in the array, and that all denominations are positive </p> </blockquote> <p>My initial thought was just to sort the array of denominations, and apply the same logic as in (4). Luckily, before I communicated this, I caught myself and realized it wouldn't work. But now I realized I was in a pickle.</p> <p>My next thought was to apply the sum-subset problem to each divisor of <em>n</em>, but realized this was probably overkill. The solution I ended up providing just used the <a href="http://en.wikipedia.org/wiki/Change-making_problem" rel="nofollow">Change-making problem</a>, and short-circuited it when I found <em>some</em> solution. I feel like there has to be a smarter way of doing this though..</p> <p>The problem reduces to: <strong>Given a finite set <em>S</em> of distinct natural numbers, find a linear combination of elements of <em>S</em> that (1) sum to another natural number <em>n</em>, (2) minimize the sum of coefficients in the lin.combination</strong></p>
2014-09-15 16:50:08.627000+00:00
2014-09-15 18:33:50.150000+00:00
2014-09-15 17:38:35.820000+00:00
algorithm
['http://arxiv.org/pdf/0809.0400.pdf']
1
68,144,617
<p>One option is, as relatively_random suggests, to optimize over the axis-angle parameterization. The derivative can then be computed relatively simply, as described in <a href="https://arxiv.org/pdf/1312.0788.pdf" rel="nofollow noreferrer">this paper</a>. The only problem is that some numerical issues might arise for rotations close to the identity.</p> <pre><code>import numpy as np

def hat(v):
    &quot;&quot;&quot;
    Vectorized version of the hat function, creating for a vector
    its skew symmetric matrix.

    Args:
        v (np.array&lt;float&gt;(..., 3, 1)): The input vector.

    Returns:
        (np.array&lt;float&gt;(..., 3, 3)): The output skew symmetric matrix.
    &quot;&quot;&quot;
    E1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
    E2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
    E3 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])

    return v[..., 0:1, :] * E1 + v[..., 1:2, :] * E2 + v[..., 2:3, :] * E3

def exp(v, der=False):
    &quot;&quot;&quot;
    Vectorized version of the exponential map.

    Args:
        v (np.array&lt;float&gt;(..., 3, 1)): The input axis-angle vector.
        der (bool, optional): Whether to output the derivative as well.
            Defaults to False.

    Returns:
        R (np.array&lt;float&gt;(..., 3, 3)): The corresponding rotation matrix.
        [dR (np.array&lt;float&gt;(3, ..., 3, 3)): The derivative of each rotation
            matrix. The matrix dR[i, ..., :, :] corresponds to the derivative
            d R[..., :, :] / d v[..., i, :], so the derivative of the rotation
            R gained through the axis-angle vector v with respect to v_i.
            Note that this is not a Jacobian of any form but a vectorized
            version of derivatives.]
    &quot;&quot;&quot;
    n = np.linalg.norm(v, axis=-2, keepdims=True)
    H = hat(v)

    with np.errstate(all='ignore'):
        R = np.identity(3) + (np.sin(n) / n) * H + ((1 - np.cos(n)) / n**2) * (H @ H)
        R = np.where(n == 0, np.identity(3), R)

    if der:
        sh = (3,) + tuple(1 for _ in range(v.ndim - 2)) + (3, 1)
        dR = np.swapaxes(np.expand_dims(v, axis=0), 0, -2) * H
        dR = dR + hat(np.cross(v, ((np.identity(3) - R) @ np.identity(3).reshape(sh)), axis=-2))
        dR = dR @ R

        n = n**2  # redefinition
        with np.errstate(all='ignore'):
            dR = dR / n
            dR = np.where(n == 0, hat(np.identity(3).reshape(sh)), dR)

        return R, dR
    else:
        return R

# generate two sets of points which differ by a rotation
np.random.seed(1001)
n = 100  # number of points
p_1 = np.random.randn(n, 3, 1)
v = np.array([0.3, -0.2, 0.1]).reshape(3, 1)  # the axis-angle vector
p_2 = exp(v) @ p_1 + np.random.randn(n, 3, 1) * 1e-2

# estimate v with least squares, so the objective function becomes:
# minimize v over f(v) = sum_[1&lt;=i&lt;=n] (||p_1_i - exp(v)p_2_i||^2)
# Due to the way least_squares is implemented we have to pass the
# individual residuals ||p_1_i - exp(v)p_2_i||^2 as ||p_1_i - exp(v)p_2_i||.
from scipy.optimize import least_squares

def loss(x):
    R = exp(x.reshape(1, 3, 1))
    y = p_2 - R @ p_1
    y = np.linalg.norm(y, axis=-2).squeeze(-1)
    return y

def d_loss(x):
    R, d_R = exp(x.reshape(1, 3, 1), der=True)
    y = p_2 - R @ p_1

    d_y = -d_R @ p_1
    d_y = np.sum(y * d_y, axis=-2) / np.linalg.norm(y, axis=-2)
    d_y = d_y.squeeze(-1).T

    return d_y

x0 = np.zeros((3))
res = least_squares(loss, x0, d_loss)

print('True axis-angle vector: {}'.format(v.reshape(-1)))
print('Estimated axis-angle vector: {}'.format(res.x))
</code></pre>
2021-06-26 17:08:13.090000+00:00
2021-06-26 17:18:52.990000+00:00
2021-06-26 17:18:52.990000+00:00
null
33,813,743
<p>I had a chat with an engineer the other day and we were both stumped on a question related to bundle adjustment. For a refresher, here is a good link explaining the problem:</p> <p><a href="http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/ZISSERMAN/bundle/bundle.html" rel="noreferrer">http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/ZISSERMAN/bundle/bundle.html</a></p> <p>The problem requires optimization over 3n+11m parameters. The camera optimization consists of 5 intrinsic camera parameters, 3 DOF for position (x,y,z), and 3 DOF for rotation (pitch, yaw and roll).</p> <p>Now, when you actually go about implementing this algorithm, a rotation matrix involves an optimization over 9 numbers. Euler's axis theorem says these 9 numbers are related and there are only 3 degrees of freedom overall.</p> <p>Suppose you represent the rotation using a normalized quaternion. Then you have optimization over 3 numbers. Same DOF.</p> <p>Is one representation more computationally efficient and better than the other? Will you have fewer variables to optimize using a rotation quaternion over a rotation matrix?</p>
2015-11-19 20:26:50.517000+00:00
2021-06-26 17:18:52.990000+00:00
2015-11-19 20:34:14.253000+00:00
graphics|3d|computer-vision
['https://arxiv.org/pdf/1312.0788.pdf']
1
11,941,732
<p>(Edit Aug 2013): In the new version 1.0 of OpenCPU, the security profiles in the OpenCPU cloud server are located in <code>/etc/apparmor.d/opencpu.d</code>. The easiest way to add custom rules is by adding them to the <code>/etc/apparmor.d/opencpu.d/custom</code> file. See the <a href="https://public.opencpu.org/download.html" rel="nofollow">OpenCPU server manual</a> for more information.</p> <p>You should probably study a bit of AppArmor syntax to understand how the profiles are structured. For more information, see the <a href="https://github.com/jeroenooms/RAppArmor#readme" rel="nofollow">RAppArmor package</a> and the <a href="http://arxiv.org/abs/1303.4808" rel="nofollow">JSS article</a>.</p>
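<p>As a hedged illustration of what such a custom rule can look like (the path is taken from the question; check the exact rules against your installed profile), granting read access to a data directory is a pair of lines like:</p> <pre><code># in /etc/apparmor.d/opencpu.d/custom
/opt/myData/ r,
/opt/myData/** r,
</code></pre>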
2012-08-13 20:24:22.363000+00:00
2013-08-26 13:24:48.197000+00:00
2013-08-26 13:24:48.197000+00:00
null
11,927,889
<p>UPDATE #2: Again jeroen, between you and me, this will be like the new FAQ for OpenCPU &lt;3. The sandbox is a great idea; can't we just put the scripts inside the sandbox? I don't want to take away the security by turning the sandbox off. Can you make a way to allow only certain R packages full access to the server? I am fine with manually approving which ones will have full access, like an admin panel of sorts. Is there a way that an admin such as myself can put my own scripts inside the server sandbox so that they can run modifications with full access, whereas other users won't be able to?</p> <p>UPDATE: OpenCPU has some sort of protection that prevents the system from running files not in a datastore. How do I disable this? I just want it to run like R does on the same machine. I know potentially people can access files outside of the datastore in the OpenCPU system without having /datastore/ in front of a file URL.</p> <p>I placed a file in /opt/myData/test.csv. In R on the same box I can run the function I want and it works: readTheFile("/opt/myData/test.csv");</p> <p>Now when I try to use OpenCPU to call it using REST, it does not work! I have even tried putting the file on a remote server and reading the file in as: Endpoint: /R/mypackage/readTheFile, filePath = "http://www.myotherserver.com/test.csv"</p> <p>Also I tried this below, which gave me "cannot open URL 'http://localhost/R/store/opt/Data-Sets/rds'": Endpoint: /R/mypackage/readTheFile, filePath = "/opt/myData/test.csv"</p> <p>Please help.</p>
2012-08-13 03:34:06.877000+00:00
2013-08-26 13:24:48.197000+00:00
2012-08-13 19:37:48.173000+00:00
opencpu
['https://public.opencpu.org/download.html', 'https://github.com/jeroenooms/RAppArmor#readme', 'http://arxiv.org/abs/1303.4808']
3
70,138,197
<p><strong>The bottleneck in all of your examples is the predecoder.</strong></p> <p>I analyzed your examples with my simulator uiCA (<a href="https://uica.uops.info/" rel="nofollow noreferrer">https://uica.uops.info/</a>, <a href="https://github.com/andreas-abel/uiCA" rel="nofollow noreferrer">https://github.com/andreas-abel/uiCA</a>). It predicts the following throughputs, which closely match your measurements:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>TP</th> <th>Link</th> </tr> </thead> <tbody> <tr> <td>g1a</td> <td>13.00</td> <td><a href="https://uica.uops.info/?code=.test7:%0D%0A%09times%2032%20nop%0D%0A%09mov%20eax,-1%0D%0A%09mov%20ebx,-1%0D%0A%09%3Bmov%20ecx,-1%0D%0A%09mov%20edx,-1%0D%0A%09mov%20edi,-1%0D%0A%09mov%20esi,-1%0D%0A%09mov%20r8d,-1%0D%0A%09mov%20r9d,-1%0D%0A%09mov%20r10d,-1%0D%0A%09mov%20r11d,-1%0D%0A%09mov%20r12d,-1%0D%0A%09mov%20r13d,-1%0D%0A%09mov%20r14d,-1%0D%0A%09mov%20r15d,-1%0D%0A%0D%0A%09dec%20ecx%0D%0A%09jge%20.test7%0D%0A&amp;syntax=NASM&amp;uArchs=SKL&amp;tools=uiCA&amp;alignment=0&amp;uiCAHtmlOptions=traceTable&amp;uiCAHtmlOptions=graph" rel="nofollow noreferrer">https://uica.uops.info/?code=...</a></td> </tr> <tr> <td>g1b</td> <td>14.00</td> <td><a href="https://uica.uops.info/?code=.test3:%0D%0A%09times%2032%20nop%0D%0A%09mov%20rax,-1%0D%0A%09mov%20rbx,-1%0D%0A%09%3Bmov%20ecx,-1%0D%0A%09mov%20rdx,-1%0D%0A%09mov%20rdi,-1%0D%0A%09mov%20rsi,-1%0D%0A%09mov%20r8,-1%0D%0A%09mov%20r9,-1%0D%0A%09mov%20r10,-1%0D%0A%09mov%20r11,-1%0D%0A%09mov%20r12,-1%0D%0A%09mov%20r13,-1%0D%0A%09mov%20r14,-1%0D%0A%09mov%20r15,-1%0D%0A%0D%0A%09dec%20ecx%0D%0A%09jge%20.test3&amp;syntax=NASM&amp;uArchs=SKL&amp;tools=uiCA&amp;alignment=0&amp;uiCAHtmlOptions=traceTable&amp;uiCAHtmlOptions=graph" rel="nofollow noreferrer">https://uica.uops.info/?code=...</a></td> </tr> <tr> <td>g2a</td> <td>16.00</td> <td><a href="https://uica.uops.info/?code=.test6:%0D%0A%09times%2032%20nop%0D%0A%09xor%20eax,%20eax%0D%0A%09dec%20eax%09%0D%0A%09xor%20ebx,%20ebx%0D%0A%09dec%20ebx%0D%0A%09xor%20edx,%20edx%0D%0A%09dec%20edx%0D%0A%09xor%20edi,%20edi%0D%0A%09dec%20edi%0D%0A%09xor%20esi,%20esi%0D%0A%09dec%20esi%0D%0A%09xor%20r8d,%20r8d%0D%0A%09dec%20r8d%0D%0A%09xor%20r9d,%20r9d%0D%0A%09dec%20r9d%0D%0A%09xor%20r10d,%20r10d%0D%0A%09dec%20r10d%0D%0A%09xor%20r11d,%20r11d%0D%0A%09dec%20r11d%0D%0A%09xor%20r12d,%20r12d%0D%0A%09dec%20r12d%0D%0A%09xor%20r13d,%20r13d%0D%0A%09dec%20r13d%0D%0A%09xor%20r14d,%20r14d%0D%0A%09dec%20r14d%0D%0A%09xor%20r15d,%20r15d%0D%0A%09dec%20r15d%0D%0A%09dec%20ecx%0D%0A%09jge%20.test6&amp;syntax=NASM&amp;uArchs=SKL&amp;tools=uiCA&amp;alignment=0&amp;uiCAHtmlOptions=traceTable&amp;uiCAHtmlOptions=graph" rel="nofollow noreferrer">https://uica.uops.info/?code=...</a></td> </tr> <tr> <td>g2b</td> <td>17.00</td> <td><a 
href="https://uica.uops.info/?code=.test2:%0D%0A%09times%2032%20nop%0D%0A%09xor%20eax,%20eax%0D%0A%09dec%20rax%09%0D%0A%09xor%20ebx,%20ebx%0D%0A%09dec%20rbx%0D%0A%09xor%20edx,%20edx%0D%0A%09dec%20rdx%0D%0A%09xor%20edi,%20edi%0D%0A%09dec%20rdi%0D%0A%09xor%20esi,%20esi%0D%0A%09dec%20rsi%0D%0A%09xor%20r8d,%20r8d%0D%0A%09dec%20r8%0D%0A%09xor%20r9d,%20r9d%0D%0A%09dec%20r9%0D%0A%09xor%20r10d,%20r10d%0D%0A%09dec%20r10%0D%0A%09xor%20r11d,%20r11d%0D%0A%09dec%20r11%0D%0A%09xor%20r12d,%20r12d%0D%0A%09dec%20r12%0D%0A%09xor%20r13d,%20r13d%0D%0A%09dec%20r13%0D%0A%09xor%20r14d,%20r14d%0D%0A%09dec%20r14%0D%0A%09xor%20r15d,%20r15d%0D%0A%09dec%20r15%0D%0A%09dec%20ecx%0D%0A%09jge%20.test2&amp;syntax=NASM&amp;uArchs=SKL&amp;tools=uiCA&amp;alignment=0&amp;uiCAHtmlOptions=traceTable&amp;uiCAHtmlOptions=graph" rel="nofollow noreferrer">https://uica.uops.info/?code=...</a></td> </tr> <tr> <td>g3a</td> <td>17.00</td> <td><a href="https://uica.uops.info/?code=.test0:%0D%0A%09times%2032%20nop%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20eax,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20edx,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20edi,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20esi,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20eax,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r8d,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r9d,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r10d,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r11d,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r12d,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r13d,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r14d,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r15d,%5Brbx-1%5D%0D%0A%0D%0A%09dec%20ecx%0D%0A%09jge%20.test0&amp;syntax=NASM&amp;uArchs=SKL&amp;tools=uiCA&amp;alignment=0&amp;uiCAHtmlOptions=traceTable&amp;uiCAHtmlOptions=graph" rel="nofollow noreferrer">https://uica.uops.info/?code=...</a></td> </tr> <tr> <td>g3b</td> <td>18.00</td> <td><a href="https://uica.uops.info/?code=.test00:%0D%0A%09times%2032%20nop%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20rax,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20rdx,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20rdi,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20rsi,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20rax,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r8,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r9,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r10,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r11,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r12,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r13,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r14,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r15,%5Brbx-1%5D%0D%0A%0D%0A%09dec%20ecx%0D%0A%09jge%20.test00&amp;syntax=NASM&amp;uArchs=SKL&amp;tools=uiCA&amp;alignment=0&amp;uiCAHtmlOptions=traceTable&amp;uiCAHtmlOptions=graph" rel="nofollow noreferrer">https://uica.uops.info/?code=...</a></td> </tr> <tr> <td>g4a</td> <td>12.00</td> <td><a href="https://uica.uops.info/?code=.test5:%0D%0A%09times%2032%20nop%0D%0A%09or%20eax,-1%0D%0A%09or%20ebx,-1%0D%0A%09%3Bmov%20ecx,-1%0D%0A%09or%20edx,-1%0D%0A%09or%20edi,-1%0D%0A%09or%20esi,-1%0D%0A%09or%20r8d,-1%0D%0A%09or%20r9d,-1%0D%0A%09or%20r10d,-1%0D%0A%09or%20r11d,-1%0D%0A%09or%20r12d,-1%0D%0A%09or%20r13d,-1%0D%0A%09or%20r14d,-1%0D%0A%09or%20r15d,-1%0D%0A%0D%0A%09dec%20ecx%0D%0A%09jge%20.test5&amp;syntax=NASM&amp;uArchs=SKL&amp;tools=uiCA&amp;alignment=0&amp;uiCAHtmlOptions=traceTable&amp;uiCAHtmlOptions=graph" rel="nofollow 
noreferrer">https://uica.uops.info/?code=...</a></td> </tr> <tr> <td>g4b</td> <td>12.00</td> <td><a href="https://uica.uops.info/?code=.test1:%0D%0A%09times%2032%20nop%0D%0A%09or%20rax,-1%0D%0A%09or%20rbx,-1%0D%0A%09%3Bmov%20ecx,-1%0D%0A%09or%20rdx,-1%0D%0A%09or%20rdi,-1%0D%0A%09or%20rsi,-1%0D%0A%09or%20r8,-1%0D%0A%09or%20r9,-1%0D%0A%09or%20r10,-1%0D%0A%09or%20r11,-1%0D%0A%09or%20r12,-1%0D%0A%09or%20r13,-1%0D%0A%09or%20r14,-1%0D%0A%09or%20r15,-1%0D%0A%0D%0A%09dec%20ecx%0D%0A%09jge%20.test1&amp;syntax=NASM&amp;uArchs=SKL&amp;tools=uiCA&amp;alignment=0&amp;uiCAHtmlOptions=traceTable&amp;uiCAHtmlOptions=graph" rel="nofollow noreferrer">https://uica.uops.info/?code=...</a></td> </tr> </tbody> </table> </div> <p>The trace table that uiCA generates provides some insights into how the code is executed. For g1a, for example, it generates the following trace: <a href="https://i.stack.imgur.com/GI1K2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GI1K2.png" alt="Trace for g1a" /></a></p> <p>You can see that for the 32 nops, the predecoder requires 8 cycles, and for the remaining instructions, it requires 5 cycles, which together corresponds to the 13 cycles that you measured.</p> <p>You may notice that in some cycles, only a small number of instructions is predecoded; for example, in the fourth cycle, only one instruction is predecoded. This is because the predecoder works on aligned 16-byte blocks, and it can handle at most five instructions per cycle (note that some sources incorrectly claim that it can handle 6 instructions per cycle). You can find more details on the predecoder, for example how it handles instructions that cross a 16-byte boundary, in <a href="https://arxiv.org/pdf/2107.14210.pdf" rel="nofollow noreferrer">this paper</a>.</p> <p>If you compare this trace with the trace for g1b, <a href="https://i.stack.imgur.com/NFcW0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NFcW0.png" alt="enter image description here" /></a> you can see that the instructions after the nops now require 6 instead of 5 cycles to be predecoded, which is because several of the instructions in g1b are longer than the corresponding ones in g1a.</p>
2021-11-27 19:49:57.733000+00:00
2021-11-27 20:04:30.747000+00:00
2021-11-27 20:04:30.747000+00:00
null
70,131,766
<p>I am trying to compare the methods mentioned by Peter Cordes in <a href="https://stackoverflow.com/a/45113467/17187836">his answer</a> to the question 'set all bits in CPU register to 1'.</p> <p>Therefore, I wrote a benchmark that sets all 13 registers to all-ones, except <code>e/rsp</code>, <code>e/rbp</code>, and <code>e/rcx</code>.</p> <p>The code is shown below. <code>times 32 nop</code> is used to avoid DSB and LSD influence.</p> <pre><code>mov ecx, 100000000
Align 32
.test3:
times 32 nop
mov rax,-1
mov rbx,-1
;mov ecx,-1
mov rdx,-1
mov rdi,-1
mov rsi,-1
mov r8,-1
mov r9,-1
mov r10,-1
mov r11,-1
mov r12,-1
mov r13,-1
mov r14,-1
mov r15,-1

dec ecx
jge .test3
jmp .out
</code></pre> <p>I tested the methods he mentioned, listed below; the <a href="https://github.com/moep0/relativeCode/tree/main/2021/1126" rel="nofollow noreferrer">full code is here</a>.</p> <pre><code>mov e/rax, -1

xor eax, eax
dec e/rax

xor ecx, ecx
lea e/rax, [rcx-1]

or e/rax, -1
</code></pre> <p>To make this question more concise, I will use <code>group1 a (g1a)</code> as shorthand for <code>mov eax,-1</code> in the tables below.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>number</th> <th>pattern</th> <th>test number</th> </tr> </thead> <tbody> <tr> <td>group1 a</td> <td>mov eax,-1</td> <td>test 7</td> </tr> <tr> <td>group1 b</td> <td>mov rax,-1</td> <td>test3</td> </tr> <tr> <td>group2 a</td> <td>xor eax, eax / dec eax</td> <td>test6</td> </tr> <tr> <td>group2 b</td> <td>xor eax, eax / dec rax</td> <td>test2</td> </tr> <tr> <td>group3 a</td> <td>xor ecx, ecx / lea eax, [rcx-1]</td> <td>test0</td> </tr> <tr> <td>group3 b</td> <td>xor ecx, ecx / lea rax, [rcx-1]</td> <td>test-1(test00)</td> </tr> <tr> <td>group4 a</td> <td>or eax,-1</td> <td>test5</td> </tr> <tr> <td>group4 b</td> <td>or rax,-1</td> <td>test1</td> </tr> </tbody> </table> </div> <p>The table below shows that from group 1 to group 3, when using 64-bit registers, there is 1 more cycle per loop.</p> <p>The IDQ_UOPS_NOT_DELIVERED count also increases, which may explain the growing number of cycles. <strong>But can this explain the exact 1 more cycle per loop?</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>cycles</th> <th>MITE cycles(r1002479)</th> <th>MITE 4uops cycles (r4002479)</th> <th>IDQ UOPS NOT DELIVERED(r19c)</th> </tr> </thead> <tbody> <tr> <td>g1a</td> <td>1,300,903,705</td> <td>1,300,104,496</td> <td>800,055,137</td> <td>601,487,115</td> </tr> <tr> <td>g1b</td> <td>1,400,852,931</td> <td>1,400,092,325</td> <td>800,049,313</td> <td>1,001,524,712</td> </tr> <tr> <td>g2a</td> <td>1,600,920,156</td> <td>1,600,113,480</td> <td>1,300,061,359</td> <td>501,522,554</td> </tr> <tr> <td>g2b</td> <td>1,700,834,769</td> <td>1,700,108,688</td> <td>1,300,057,576</td> <td>901,467,008</td> </tr> <tr> <td>g3a</td> <td>1,701,971,425</td> <td>1,700,093,298</td> <td>1,300,111,482</td> <td>902,327,493</td> </tr> <tr> <td>g3b</td> <td>1,800,891,861</td> <td>1,800,110,096</td> <td>1,300,059,338</td> <td>1,301,497,001</td> </tr> <tr> <td>g4a</td> <td>1,201,164,208</td> <td>1,200,122,275</td> <td>1,100,049,081</td> <td>201,592,292</td> </tr> <tr> <td>g4b</td> <td>1,200,553,577</td> <td>1,200,074,422</td> <td>1,100,031,729</td> <td>200,772,985</td> </tr> </tbody> </table> </div> <p>In addition, the port distribution of g2a differs from that of g2b, unlike g1a and g1b (which share the same port distribution) or g3a and g3b.</p> <p><strong>And if I comment out <code>times 32 nop</code>, this phenomenon disappears. Is it related to MITE?</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>p0</th> <th>p1</th> <th>p2</th> <th>p3</th> <th>p4</th> <th>p5</th> <th>p6</th> <th>p7</th> </tr> </thead> <tbody> <tr> <td>g1a</td> <td>299,868,019</td> <td>300,014,657</td> <td>5,925</td> <td>7,794</td> <td>16,589</td> <td>300,279,232</td> <td>499,885,294</td> <td>7,242</td> </tr> <tr> <td>g1b</td> <td>299,935,968</td> <td>300,085,089</td> <td>6,622</td> <td>8,758</td> <td>18,842</td> <td>299,935,445</td> <td>500,426,436</td> <td>7,336</td> </tr> <tr> <td>g2a</td> <td>299,800,192</td> <td>299,758,460</td> <td>7,461</td> <td>9,635</td> <td>20,622</td> <td>399,836,486</td> <td>400,312,354</td> <td>8,446</td> </tr> <tr> <td>g2b</td> <td>200,047,079</td> <td>200,203,026</td> <td>7,899</td> <td>9,967</td> <td>21,539</td> <td>500,542,313</td> <td>500,296,034</td> <td>9,635</td> </tr> <tr> <td>g3a</td> <td>36,568</td> <td>550,860,773</td> <td>7,784</td> <td>10,147</td> <td>22,538</td> <td>749,063,082</td> <td>99,856,623</td> <td>9,767</td> </tr> <tr> <td>g3b</td> <td>36,858</td> <td>599,960,197</td> <td>8,232</td> <td>10,763</td> <td>23,086</td> <td>700,499,893</td> <td>100,078,368</td> <td>9,513</td> </tr> <tr> <td>g4a</td> <td>200,142,036</td> <td>300,600,535</td> <td>5,383</td> <td>6,705</td> <td>15,344</td> <td>400,045,302</td> <td>500,364,377</td> <td>6,802</td> </tr> <tr> <td>g4b</td> <td>200,224,703</td> <td>300,284,609</td> <td>5,464</td> <td>7,031</td> <td>15,817</td> <td>400,047,050</td> <td>499,467,546</td> <td>6,746</td> </tr> </tbody> </table> </div> <p>Environment: Intel i7-10700, Ubuntu 20.04, and NASM 2.14.02.</p> <p><em>It is a little bit hard for me to explain this in English. Please comment if the description is unclear.</em></p>
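<p>For reference, the &quot;1 more cycle per loop&quot; reading comes from dividing the cycle counts by the loop's 100,000,000 iterations; a quick Python check, using the numbers copied from the cycles table above:</p> <pre><code>iters = 100_000_000          # loop trip count from the benchmark
cycles = {"g1a": 1_300_903_705, "g1b": 1_400_852_931,
          "g2a": 1_600_920_156, "g2b": 1_700_834_769,
          "g3a": 1_701_971_425, "g3b": 1_800_891_861}
for name, c in cycles.items():
    print(name, round(c / iters, 3))
# g1a ~13.009 vs g1b ~14.009; g2a ~16.009 vs g2b ~17.008; g3a ~17.020 vs g3b ~18.009
</code></pre>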
2021-11-27 03:10:43.317000+00:00
2021-11-27 20:04:30.747000+00:00
2021-11-27 06:35:46.210000+00:00
assembly|x86-64|intel|cpu-architecture|micro-optimization
['https://uica.uops.info/', 'https://github.com/andreas-abel/uiCA', 'https://uica.uops.info/?code=.test7:%0D%0A%09times%2032%20nop%0D%0A%09mov%20eax,-1%0D%0A%09mov%20ebx,-1%0D%0A%09%3Bmov%20ecx,-1%0D%0A%09mov%20edx,-1%0D%0A%09mov%20edi,-1%0D%0A%09mov%20esi,-1%0D%0A%09mov%20r8d,-1%0D%0A%09mov%20r9d,-1%0D%0A%09mov%20r10d,-1%0D%0A%09mov%20r11d,-1%0D%0A%09mov%20r12d,-1%0D%0A%09mov%20r13d,-1%0D%0A%09mov%20r14d,-1%0D%0A%09mov%20r15d,-1%0D%0A%0D%0A%09dec%20ecx%0D%0A%09jge%20.test7%0D%0A&syntax=NASM&uArchs=SKL&tools=uiCA&alignment=0&uiCAHtmlOptions=traceTable&uiCAHtmlOptions=graph', 'https://uica.uops.info/?code=.test3:%0D%0A%09times%2032%20nop%0D%0A%09mov%20rax,-1%0D%0A%09mov%20rbx,-1%0D%0A%09%3Bmov%20ecx,-1%0D%0A%09mov%20rdx,-1%0D%0A%09mov%20rdi,-1%0D%0A%09mov%20rsi,-1%0D%0A%09mov%20r8,-1%0D%0A%09mov%20r9,-1%0D%0A%09mov%20r10,-1%0D%0A%09mov%20r11,-1%0D%0A%09mov%20r12,-1%0D%0A%09mov%20r13,-1%0D%0A%09mov%20r14,-1%0D%0A%09mov%20r15,-1%0D%0A%0D%0A%09dec%20ecx%0D%0A%09jge%20.test3&syntax=NASM&uArchs=SKL&tools=uiCA&alignment=0&uiCAHtmlOptions=traceTable&uiCAHtmlOptions=graph', 'https://uica.uops.info/?code=.test6:%0D%0A%09times%2032%20nop%0D%0A%09xor%20eax,%20eax%0D%0A%09dec%20eax%09%0D%0A%09xor%20ebx,%20ebx%0D%0A%09dec%20ebx%0D%0A%09xor%20edx,%20edx%0D%0A%09dec%20edx%0D%0A%09xor%20edi,%20edi%0D%0A%09dec%20edi%0D%0A%09xor%20esi,%20esi%0D%0A%09dec%20esi%0D%0A%09xor%20r8d,%20r8d%0D%0A%09dec%20r8d%0D%0A%09xor%20r9d,%20r9d%0D%0A%09dec%20r9d%0D%0A%09xor%20r10d,%20r10d%0D%0A%09dec%20r10d%0D%0A%09xor%20r11d,%20r11d%0D%0A%09dec%20r11d%0D%0A%09xor%20r12d,%20r12d%0D%0A%09dec%20r12d%0D%0A%09xor%20r13d,%20r13d%0D%0A%09dec%20r13d%0D%0A%09xor%20r14d,%20r14d%0D%0A%09dec%20r14d%0D%0A%09xor%20r15d,%20r15d%0D%0A%09dec%20r15d%0D%0A%09dec%20ecx%0D%0A%09jge%20.test6&syntax=NASM&uArchs=SKL&tools=uiCA&alignment=0&uiCAHtmlOptions=traceTable&uiCAHtmlOptions=graph', 'https://uica.uops.info/?code=.test2:%0D%0A%09times%2032%20nop%0D%0A%09xor%20eax,%20eax%0D%0A%09dec%20rax%09%0D%0A%09xor%20ebx,%20ebx%0D%0A%09dec%20rbx%0D%0A%09xor%20edx,%20edx%0D%0A%09dec%20rdx%0D%0A%09xor%20edi,%20edi%0D%0A%09dec%20rdi%0D%0A%09xor%20esi,%20esi%0D%0A%09dec%20rsi%0D%0A%09xor%20r8d,%20r8d%0D%0A%09dec%20r8%0D%0A%09xor%20r9d,%20r9d%0D%0A%09dec%20r9%0D%0A%09xor%20r10d,%20r10d%0D%0A%09dec%20r10%0D%0A%09xor%20r11d,%20r11d%0D%0A%09dec%20r11%0D%0A%09xor%20r12d,%20r12d%0D%0A%09dec%20r12%0D%0A%09xor%20r13d,%20r13d%0D%0A%09dec%20r13%0D%0A%09xor%20r14d,%20r14d%0D%0A%09dec%20r14%0D%0A%09xor%20r15d,%20r15d%0D%0A%09dec%20r15%0D%0A%09dec%20ecx%0D%0A%09jge%20.test2&syntax=NASM&uArchs=SKL&tools=uiCA&alignment=0&uiCAHtmlOptions=traceTable&uiCAHtmlOptions=graph', 'https://uica.uops.info/?code=.test0:%0D%0A%09times%2032%20nop%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20eax,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20edx,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20edi,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20esi,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20eax,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r8d,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r9d,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r10d,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r11d,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r12d,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r13d,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r14d,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r15d,%5Brbx-1%5D%0D%0A%0D%0A%09dec%20ecx%0D%0A%09jge%20.test0&syntax=NASM&uArchs=SKL&tools=uiCA&alignment=0&uiCAHtmlOptions=traceTable&uiCAHtmlOptions=graph', 
'https://uica.uops.info/?code=.test00:%0D%0A%09times%2032%20nop%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20rax,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20rdx,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20rdi,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20rsi,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20rax,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r8,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r9,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r10,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r11,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r12,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r13,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r14,%5Brbx-1%5D%0D%0A%09xor%20ebx,ebx%0D%0A%09lea%20r15,%5Brbx-1%5D%0D%0A%0D%0A%09dec%20ecx%0D%0A%09jge%20.test00&syntax=NASM&uArchs=SKL&tools=uiCA&alignment=0&uiCAHtmlOptions=traceTable&uiCAHtmlOptions=graph', 'https://uica.uops.info/?code=.test5:%0D%0A%09times%2032%20nop%0D%0A%09or%20eax,-1%0D%0A%09or%20ebx,-1%0D%0A%09%3Bmov%20ecx,-1%0D%0A%09or%20edx,-1%0D%0A%09or%20edi,-1%0D%0A%09or%20esi,-1%0D%0A%09or%20r8d,-1%0D%0A%09or%20r9d,-1%0D%0A%09or%20r10d,-1%0D%0A%09or%20r11d,-1%0D%0A%09or%20r12d,-1%0D%0A%09or%20r13d,-1%0D%0A%09or%20r14d,-1%0D%0A%09or%20r15d,-1%0D%0A%0D%0A%09dec%20ecx%0D%0A%09jge%20.test5&syntax=NASM&uArchs=SKL&tools=uiCA&alignment=0&uiCAHtmlOptions=traceTable&uiCAHtmlOptions=graph', 'https://uica.uops.info/?code=.test1:%0D%0A%09times%2032%20nop%0D%0A%09or%20rax,-1%0D%0A%09or%20rbx,-1%0D%0A%09%3Bmov%20ecx,-1%0D%0A%09or%20rdx,-1%0D%0A%09or%20rdi,-1%0D%0A%09or%20rsi,-1%0D%0A%09or%20r8,-1%0D%0A%09or%20r9,-1%0D%0A%09or%20r10,-1%0D%0A%09or%20r11,-1%0D%0A%09or%20r12,-1%0D%0A%09or%20r13,-1%0D%0A%09or%20r14,-1%0D%0A%09or%20r15,-1%0D%0A%0D%0A%09dec%20ecx%0D%0A%09jge%20.test1&syntax=NASM&uArchs=SKL&tools=uiCA&alignment=0&uiCAHtmlOptions=traceTable&uiCAHtmlOptions=graph', 'https://i.stack.imgur.com/GI1K2.png', 'https://arxiv.org/pdf/2107.14210.pdf', 'https://i.stack.imgur.com/NFcW0.png']
13
29,353,574
<p>I had to solve a similar problem recently that involved counting the number of indentations on blobs within an image (basically, the connected components returned by bwconncomp). The method I used was to look at curvature changes along the boundary calculated via the FFT. In your case, the red blobs would have a large number of curvature variations, whereas the black regions would not. It's a pretty easy calculation and relatively fast. The code is on GitHub here:</p> <p><a href="https://github.com/mjsottile/blobdents" rel="nofollow">https://github.com/mjsottile/blobdents</a></p> <p>The file of interest is src/countindents.m. A short description of the approach is here:</p> <p><a href="http://arxiv.org/abs/1501.07692" rel="nofollow">http://arxiv.org/abs/1501.07692</a></p>
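<p>For readers without MATLAB, the core idea can be sketched in a few lines of Python/NumPy: treat the closed boundary as a complex periodic signal, differentiate it spectrally, and count strong concave curvature excursions. The threshold below is an assumption to tune per image, not a value from the paper:</p> <pre><code>import numpy as np

def count_indents(x, y, thresh=0.5):
    # Closed boundary as a complex periodic signal z(t) = x(t) + i*y(t)
    z = x + 1j * y
    n = len(z)
    k = np.fft.fftfreq(n, d=1.0 / n)      # integer mode numbers
    Z = np.fft.fft(z)
    dz = np.fft.ifft(1j * k * Z)          # z'(t) by spectral differentiation
    ddz = np.fft.ifft(-(k ** 2) * Z)      # z''(t)
    # Signed curvature of a planar curve: Im(conj(z') * z'') / |z'|^3
    kappa = np.imag(np.conj(dz) * ddz) / np.abs(dz) ** 3
    # Count runs where the curvature dips below -thresh (concave "dents")
    dented = kappa &lt; -thresh
    return int(np.count_nonzero(dented[1:] &amp; ~dented[:-1]))
</code></pre>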
2015-03-30 18:30:48.717000+00:00
2015-03-30 18:30:48.717000+00:00
null
null
29,350,867
<p>I'm working on MATLAB on some regions inside an image. I'm at a point in which I would like to be able to separate regions which exhibit some kind of regularity (e.g., being circle-ish or square-ish) from regions which do not resemble any known figure and which for my application are mere noise. I'll illustrate this using a descriptive MS Paint image:</p> <p><img src="https://i.stack.imgur.com/UEYtK.png" alt="enter image description here"></p> <p>Is there any tool that, <em>most of the time</em> (or even less, I know this can't be 100/100) will recognize the red thing as being <em>different</em>?</p> <p>I'll deal with many shapes in a single image, so I don't mind if I carry on some red monsters along the way, as long as the majority of them are kicked out. Of course I know the indices of these regions, so I can manipulate them in MATLAB.</p> <p>Many algorithms come to mind, e.g., getting the boundary and checking for its regularity/the number of times it changes curvature/..., checking for variations in vertical length through different columns (nearly 0 for the linear feature, really high for the red stuff), ...</p> <p>However, I was hoping for some help from a tool out there. It doesn't matter if this tool won't cover all cases (for example, will kick out circles), I've been very broad to get the maximum number of inputs from you guys - any tool will be inspiring and helpful (though, of course, we can't expect a perfect answer for the deeper question - recognizing regular shapes - which seems more like an AI field of research). I also think that, while being broad, this is totally non-subjective so should fit in SO. Thank you.</p> <p><strong>Side note 1</strong>: I'll deal mostly with elongated, extended features like the top-right one, so circles are not that relevant.</p> <p><strong>Side note 2</strong>: To be 100% clear, I would need something (be it an already existent tool, or some ideas pointed out by you) that acts on the indices of the shapes, in terms of rows-columns into the original image, or on the boundary of the shape itself.</p> <p><strong>Side note 3</strong>: Apart from tools/suggestions/ideas, you are welcome to write down some lines of code ;) I'm getting the regions as <em>connected components</em> from <code>bwconncomp</code>.</p>
2015-03-30 16:00:34.140000+00:00
2015-04-07 08:40:49.567000+00:00
2015-03-30 16:19:17.640000+00:00
matlab|image-processing
['https://github.com/mjsottile/blobdents', 'http://arxiv.org/abs/1501.07692']
2
13,085,625
<h1>It gets stuck</h1> <h2>Stuck at the seat level</h2> <blockquote> <p>I am not sure if this will always finish or if it can get stuck even if there is a valid assignment.</p> </blockquote> <p>It may get stuck. Assume <em>k</em> = 4 tables, <em>N</em> = 16 players, 4 clans, 4 people in each clan. Let <em>A1</em> through <em>A4</em> be the players in clan <em>A</em> and similarly for the other clans. Then the following is an example of a hand-crafted situation which could potentially arise from your algorithm:</p> <pre><code>Round 1:
Table 1: A1, B1, C1, D1
Table 2: A2, B2, C2, D2
Table 3: A3, B3, C3, D3
Table 4: A4, B4, C4, D4

Round 2:
Table 1: A1, B2, C3, D4
Table 2: A2, B3, C1 !!!
</code></pre> <h2>Stuck at the round level?</h2> <p>An interesting question that still remains is the following: if a valid assignment for all three rounds is possible, can you find valid assignments for two rounds that preclude all valid assignments for the third round? If this were the case, you could get stuck at the round level, so when doing some backtracking algorithm, you might have to sometimes undo complete rounds in order to obtain a valid solution. I have no example where this does happen, and no strong gut feeling either way.</p> <h1>Better ways</h1> <blockquote> <p>is there a better way to do it</p> </blockquote> <p>I guess that with enough effort, one could squeeze this into the framework of some standard graph algorithm. Most likely, that graph problem would be NP hard, so there won't be polynomial time algorithms available for that either.</p> <p>Donald Knuth wrote a nice paper about <a href="http://arxiv.org/abs/cs/0011047" rel="nofollow">dancing links</a> and their application to solving the <a href="http://en.wikipedia.org/wiki/Exact_cover" rel="nofollow">exact cover problem</a>. It still uses back-tracking and exponential time in the worst case, but it keeps data structures small for those parts of the search tree where most work is done, thus speeding up the search. Maybe some of these ideas can be applied to your situation as well. Just guessing, though; I don't have a particular implementation in mind yet.</p> <p>Another idea: perhaps you can adopt the concept of <em>augmenting paths,</em> as it is used when computing matchings. The idea goes something like this: if there is no unseated person available, pick an arbitrary person from some other table. If that person is compatible with the current table, move it to that table. By doing so, that other table would be short one player, and you could try to fill that gap using some unseated player. If that doesn't work, you re-seat an existing player again. You probably shouldn't start moving players right away. Instead, you should first try to find a full augmenting path, starting at a vacant seat and ending at an unseated person. Only after you have verified that such a chain exists should you start moving people according to it.</p>
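<p>A minimal Python sketch of the randomized per-round construction with a whole-round retry, just to make the idea concrete (the data layout and the retry limit are my own choices, not from a reference implementation):</p> <pre><code>import random

def seat_round(players, clan, history, k, tries=1000):
    # players: list of ids; clan[p]: clan id of p; history[p]: set of ids p has sat with
    for _ in range(tries):
        random.shuffle(players)
        tables, ok = [[] for _ in range(k)], True
        for p in players:
            t = next((t for t in tables if len(t) &lt; 4
                      and all(clan[q] != clan[p] and q not in history[p] for q in t)), None)
            if t is None:        # no compatible table: scrap this round attempt
                ok = False
                break
            t.append(p)
        if ok:
            for t in tables:     # record who sat with whom
                for p in t:
                    history[p].update(q for q in t if q != p)
            return tables
    return None  # got stuck every time; the caller should backtrack a whole round
</code></pre>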
2012-10-26 10:57:48.943000+00:00
2012-10-26 12:02:24.950000+00:00
2012-10-26 12:02:24.950000+00:00
null
13,078,397
<p>Consider <code>N = 4k</code> players, <code>k</code> tables, and a number of clans such that each player belongs to exactly one clan. A clan can contain at most <code>k</code> players.</p> <p>We want to organize 3 rounds of a game such that, for each table that seats exactly 4 players, no 2 players sitting there are part of the same clan, and, for the later rounds, no 2 players sitting there have sat at the same table before. All players play all rounds.</p> <p>How can we do this efficiently if <code>N</code> can be as large as <code>~80</code>?</p> <p>I thought of this:</p> <pre><code>for each table T:
    repeat until 4 players have been seated at T:
        pick a random player X that is not currently seated anywhere
        if X has not sat at the same table as anyone currently at T
           AND X is not from the same clan as anyone currently at T
            seat X at T
            break
</code></pre> <p>I am not sure if this will always finish or if it can get stuck even if there is a valid assignment. Even if this works, is there a better way to do it?</p>
2012-10-25 22:38:07.827000+00:00
2012-10-26 13:31:52.300000+00:00
2012-10-26 07:23:01.247000+00:00
algorithm|math|random
['http://arxiv.org/abs/cs/0011047', 'http://en.wikipedia.org/wiki/Exact_cover']
2
4,110,871
<p>These two papers could possibly come in useful</p> <ul> <li><a href="http://arxiv.org/PS_cache/arxiv/pdf/0811/0811.0063v1.pdf" rel="nofollow">Andrej Dujella - A Variant of Wiener's Attack on RSA</a></li> <li><a href="http://arxiv.org/PS_cache/cs/pdf/0402/0402052v1.pdf" rel="nofollow">Andrej Dujella - Continued Fractions and RSA with small secret exponent</a></li> </ul> <p>Came across them when I was doing some basic research on continued fractions.</p>
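<p>For the concrete key in the question you do not even need a small-exponent attack, though: when both exponents are known and e is small, p and q follow directly from the standard identity e*d - 1 = k*phi(n) for some integer k &lt; e. A minimal Python sketch:</p> <pre><code>from math import isqrt

n, e, d = 10142789312725007, 5, 8114231289041741
t = e * d - 1                      # t = k * phi(n) for some 1 &lt;= k &lt; e
for k in range(1, e):
    if t % k:
        continue
    phi = t // k
    s = n - phi + 1                # s = p + q
    disc = s * s - 4 * n           # (p - q)^2 if this k is the right one
    if disc &gt;= 0 and isqrt(disc) ** 2 == disc:
        r = isqrt(disc)
        p, q = (s + r) // 2, (s - r) // 2
        if p * q == n:
            print(p, q)
            break
</code></pre>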
2010-11-05 23:07:07.300000+00:00
2010-11-08 12:47:14.267000+00:00
2010-11-08 12:47:14.267000+00:00
null
4,078,902
<p>Given the following RSA keys, how does one go about determining what the values of <em>p</em> and <em>q</em> are?</p> <pre><code>Public Key: (10142789312725007, 5) Private Key: (10142789312725007, 8114231289041741) </code></pre>
2010-11-02 14:58:06.563000+00:00
2021-11-15 06:49:05.093000+00:00
2014-12-18 18:42:33.360000+00:00
math|rsa|encryption-asymmetric|public-key-encryption
['http://arxiv.org/PS_cache/arxiv/pdf/0811/0811.0063v1.pdf', 'http://arxiv.org/PS_cache/cs/pdf/0402/0402052v1.pdf']
2
71,608,486
<p>You can find an <a href="https://github.com/google-research-datasets/Objectron/blob/master/objectron/dataset/iou.py" rel="nofollow noreferrer">implementation</a> for 3D-oriented bounding boxes accompanying the Objectron benchmark. Their IoU metric takes two boxes as input and calculates the intersection volume based on the <a href="https://en.wikipedia.org/wiki/Sutherland%E2%80%93Hodgman_algorithm" rel="nofollow noreferrer">Sutherland-Hodgman polygon clipping algorithm</a>. A detailed description is also given in the <a href="https://arxiv.org/pdf/2012.09988.pdf" rel="nofollow noreferrer">paper</a>.</p>
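<p>To illustrate the clipping step that the metric is built on, here is a minimal 2D Sutherland-Hodgman sketch in Python (the Objectron code clips 3D face polygons against the planes of the other box, but the per-edge logic is the same; this standalone version assumes a convex clip polygon given in counter-clockwise order):</p> <pre><code>def clip_polygon(subject, clipper):
    # Sutherland-Hodgman: clip `subject` against each edge of convex `clipper`
    def inside(p, a, b):  # p lies on the left of the directed edge a-&gt;b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) &gt;= 0
    def intersect(p, q, a, b):  # intersection of segment pq with the line ab
        x1, y1, x2, y2 = p[0], p[1], q[0], q[1]
        x3, y3, x4, y4 = a[0], a[1], b[0], b[1]
        den = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
        s = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / den
        return (x1 + s*(x2-x1), y1 + s*(y2-y1))
    out = list(subject)
    for a, b in zip(clipper, clipper[1:] + clipper[:1]):
        inp, out = out, []
        for p, q in zip(inp, inp[1:] + inp[:1]):
            if inside(q, a, b):
                if not inside(p, a, b):
                    out.append(intersect(p, q, a, b))
                out.append(q)
            elif inside(p, a, b):
                out.append(intersect(p, q, a, b))
        if not out:
            break
    return out
</code></pre>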
2022-03-24 19:39:48.400000+00:00
2022-03-24 19:39:48.400000+00:00
null
null
64,872,099
<p>I have two 3D bounding boxes with 9 degrees of freedom (3 translation, 3 dimensions, 3 rotations). Now I want to calculate the Intersection over Union (IoU), also known as the <a href="https://en.wikipedia.org/wiki/Jaccard_index" rel="nofollow noreferrer">Jaccard Index</a>, of them:</p> <p><a href="https://i.stack.imgur.com/ETH2Q.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ETH2Q.gif" alt="Intersection Volume divided by Union Volume" /></a></p> <p>I know this has <a href="https://www.researchgate.net/profile/Dingfu_Zhou/publication/335135103_IoU_Loss_for_2D3D_Object_Detection/links/5d662370299bf1f70b124e0d/IoU-Loss-for-2D-3D-Object-Detection.pdf" rel="nofollow noreferrer">already</a> <a href="http://www.cvlibs.net/publications/Geiger2012CVPR.pdf" rel="nofollow noreferrer">been</a> <a href="https://varunagrawal.github.io/bbox/bbox.html#bbox.metrics.jaccard_index_3d" rel="nofollow noreferrer">implemented</a> for the case of a single rotation (around the z-axis) using the bird's eye view; however, I am looking for a solution where the 3D bounding boxes can be rotated around all axes (x, y, z).</p> <p>So far I have not found any approaches. I would probably start by calculating all intersection points and then try to calculate volumes using tetrahedrons. Any links or hints are welcome!</p>
2020-11-17 08:57:40.410000+00:00
2022-03-24 19:39:48.400000+00:00
null
3d|computer-vision|object-detection|metrics|bounding-box
['https://github.com/google-research-datasets/Objectron/blob/master/objectron/dataset/iou.py', 'https://en.wikipedia.org/wiki/Sutherland%E2%80%93Hodgman_algorithm', 'https://arxiv.org/pdf/2012.09988.pdf']
3
47,274,258
<p>You will probably want to use Python to wrap a C/C++ routine, instead of using the Python implementation of RdRand(). A research paper here (<a href="http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120" rel="nofollow noreferrer">http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120</a>), or non-paywalled version here (<a href="https://arxiv.org/abs/1707.02212" rel="nofollow noreferrer">https://arxiv.org/abs/1707.02212</a>) recently showed how poor the performance of RdRand() in Python is. Even so, as the paper mentions, the RdRand and RdSeed instructions are not quite "truly" random...</p> <p>Hope that helps.</p>
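<p>A minimal sketch of what the Python side of such a wrapper could look like, using <code>ctypes</code>. The shared library name and the <code>rdrand64</code> function are hypothetical; you would build them yourself from a few lines of C around the <code>_rdrand64_step</code> intrinsic:</p> <pre><code>import ctypes

# Hypothetical helper library: a small C file compiled with
#   gcc -O2 -mrdrnd -shared -fPIC rdrand.c -o librdrand.so
# exposing: unsigned long long rdrand64(void);
lib = ctypes.CDLL("./librdrand.so")
lib.rdrand64.restype = ctypes.c_uint64
lib.rdrand64.argtypes = []

print(lib.rdrand64())  # one 64-bit hardware random number per call
</code></pre>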
2017-11-13 21:59:49.950000+00:00
2017-11-13 21:59:49.950000+00:00
null
null
41,393,847
<p>I want to make use of Intel's RDRAND feature on Windows and generate true random numbers (since Python's random module isn't so random). Is there any API in Python which can access this feature?</p> <p>I've tried installing the rdrand module mentioned in the comment below, but I keep getting an error. Log: <a href="http://pastebin.com/A2Vqsqec" rel="nofollow noreferrer">http://pastebin.com/A2Vqsqec</a></p> <p>The error seems to be thrown by these lines in rdrand.c:</p> <pre><code>#ifdef __GNUC__ #define USING_GCC 1 #elif __clang__ #define USING_CLANG 1 #else #error Only support for gcc or clang currently #error if you port to another compiler, please #error send back the patch to https://github.com/stillson/rdrand #endif </code></pre> <p>Why is this happening?</p> <p>UPDATE: I've checked and made sure that __GNUC__ is defined</p>
2016-12-30 09:34:11.867000+00:00
2018-12-01 22:27:27.163000+00:00
2018-12-01 22:24:16.697000+00:00
python|c|python-2.7|random|rdrand
['http://iopscience.iop.org/article/10.3847/1538-4357/aa7ede/meta;jsessionid=A9DA9DDB925E6522D058F3CEEC7D0B21.ip-10-40-2-120', 'https://arxiv.org/abs/1707.02212']
2
59,937,080
<p>I'd recommend quantizing your model. This would reduce the file size to about 1/4 of the original, since the 32-bit float weights become 8-bit. You can try just weight quantization, or full quantization.</p> <p>Using the Python API, for only weight quantization:</p> <pre><code>import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_quant_model = converter.convert()
</code></pre> <p>For full quantization, I'd recommend using a representative dataset to reduce the accuracy loss associated with quantization.</p> <pre><code>import tensorflow as tf
def representative_dataset_gen():
  for _ in range(num_calibration_steps):
    # Get sample input data as a numpy array in a method of your choosing.
    yield [input]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
tflite_quant_model = converter.convert()
</code></pre> <p>You can also try MobileNet architectures. Quantized versions of these can be anywhere from &lt;1MB to ~5MB. You can easily find some Tensorflow implementations of MobileFaceNets with a quick Google search, but here's a link to the paper that started it: <a href="https://arxiv.org/abs/1804.07573" rel="nofollow noreferrer">https://arxiv.org/abs/1804.07573</a></p>
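<p>Once converted, the quantized model can be loaded straight from the in-memory buffer (or from a file you ship or download) for a quick sanity check in Python, e.g.:</p> <pre><code>interpreter = tf.lite.Interpreter(model_content=tflite_quant_model)  # or model_path="..."
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["shape"])  # confirm the expected input shape
</code></pre>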
2020-01-27 18:45:42.690000+00:00
2020-01-29 20:28:26.363000+00:00
2020-01-29 20:28:26.363000+00:00
null
59,048,118
<p>Okay, so in my app I am trying to implement face recognition using a FaceNet model that is converted to TFLite, weighing in at about 93 MB. This model inevitably increases the size of my APK, so I am trying to find alternative ways to deal with this.</p> <p>The first thing I can think of is to compress it in some way and then uncompress it when the app is installed.</p> <p>Another way is to upload the model to a server and, after it has been downloaded, get it loaded into my application. However, I do not know how to implement this:</p> <p>By default, FaceNet allows loading from the assets folder:</p> <pre><code> var facenet = FaceNet(getAssets());
</code></pre> <p>But in case I'm downloading the model, how can I get it loaded into my application?</p> <p>Here is my FaceNet initialization code:</p> <pre><code>    public FaceNet(AssetManager assetManager) throws IOException {
        tfliteModel = loadModelFile(assetManager);
        tflite = new Interpreter(tfliteModel, tfliteOptions);

        imgData = ByteBuffer.allocateDirect(
                BATCH_SIZE * IMAGE_HEIGHT * IMAGE_WIDTH * NUM_CHANNELS * NUM_BYTES_PER_CHANNEL);
        imgData.order(ByteOrder.nativeOrder());
    }

    private MappedByteBuffer loadModelFile(AssetManager assetManager) throws IOException {
        AssetFileDescriptor fileDescriptor = assetManager.openFd(MODEL_PATH);
        FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
        FileChannel fileChannel = inputStream.getChannel();
        long startOffset = fileDescriptor.getStartOffset();
        long declaredLength = fileDescriptor.getDeclaredLength();
        return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
    }
</code></pre> <p>My FaceNet Class:</p> <pre><code>public class FaceNet {
    private static final String MODEL_PATH = "facenet.tflite";

    private static final float IMAGE_MEAN = 127.5f;
    private static final float IMAGE_STD = 127.5f;

    private static final int BATCH_SIZE = 1;
    private static final int IMAGE_HEIGHT = 160;
    private static final int IMAGE_WIDTH = 160;
    private static final int NUM_CHANNELS = 3;
    private static final int NUM_BYTES_PER_CHANNEL = 4;
    private static final int EMBEDDING_SIZE = 512;

    private final int[] intValues = new int[IMAGE_HEIGHT * IMAGE_WIDTH];

    private ByteBuffer imgData;

    private MappedByteBuffer tfliteModel;
    private Interpreter tflite;
    private final Interpreter.Options tfliteOptions = new Interpreter.Options();

    public FaceNet(AssetManager assetManager) throws IOException {
        tfliteModel = loadModelFile(assetManager);
        tflite = new Interpreter(tfliteModel, tfliteOptions);

        imgData = ByteBuffer.allocateDirect(
                BATCH_SIZE * IMAGE_HEIGHT * IMAGE_WIDTH * NUM_CHANNELS * NUM_BYTES_PER_CHANNEL);
        imgData.order(ByteOrder.nativeOrder());
    }

    private MappedByteBuffer loadModelFile(AssetManager assetManager) throws IOException {
        AssetFileDescriptor fileDescriptor = assetManager.openFd(MODEL_PATH);
        FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
        FileChannel fileChannel = inputStream.getChannel();
        long startOffset = fileDescriptor.getStartOffset();
        long declaredLength = fileDescriptor.getDeclaredLength();
        return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
    }

    private void convertBitmapToByteBuffer(Bitmap bitmap) {
        if (imgData == null) {
            return;
        }
        imgData.rewind();

        bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());

        // Convert the image to floating point.
        int pixel = 0;
        for (int i = 0; i &lt; IMAGE_HEIGHT; ++i) {
            for (int j = 0; j &lt; IMAGE_WIDTH; ++j) {
                final int val = intValues[pixel++];
                addPixelValue(val);
            }
        }
    }

    private void addPixelValue(int pixelValue){
        //imgData.putFloat((((pixelValue &gt;&gt; 16) &amp; 0xFF) - IMAGE_MEAN) / IMAGE_STD);
        //imgData.putFloat((((pixelValue &gt;&gt; 8) &amp; 0xFF) - IMAGE_MEAN) / IMAGE_STD);
        //imgData.putFloat(((pixelValue &amp; 0xFF) - IMAGE_MEAN) / IMAGE_STD);

        imgData.putFloat(((pixelValue &gt;&gt; 16) &amp; 0xFF) / 255.0f);
        imgData.putFloat(((pixelValue &gt;&gt; 8) &amp; 0xFF) / 255.0f);
        imgData.putFloat((pixelValue &amp; 0xFF) / 255.0f);
    }

    public void inspectModel(){
        String tag = "Model Inspection";
        Log.i(tag, "Number of input tensors: " + String.valueOf(tflite.getInputTensorCount()));
        Log.i(tag, "Number of output tensors: " + String.valueOf(tflite.getOutputTensorCount()));

        Log.i(tag, tflite.getInputTensor(0).toString());
        Log.i(tag, "Input tensor data type: " + tflite.getInputTensor(0).dataType());
        Log.i(tag, "Input tensor shape: " + Arrays.toString(tflite.getInputTensor(0).shape()));
        Log.i(tag, "Output tensor 0 shape: " + Arrays.toString(tflite.getOutputTensor(0).shape()));
    }

    private Bitmap resizedBitmap(Bitmap bitmap, int height, int width){
        return Bitmap.createScaledBitmap(bitmap, width, height, true);
    }

    private Bitmap croppedBitmap(Bitmap bitmap, int upperCornerX, int upperCornerY, int height, int width){
        return Bitmap.createBitmap(bitmap, upperCornerX, upperCornerY, width, height);
    }

    private float[][] run(Bitmap bitmap){
        bitmap = resizedBitmap(bitmap, IMAGE_HEIGHT, IMAGE_WIDTH);
        convertBitmapToByteBuffer(bitmap);

        float[][] embeddings = new float[1][512];
        tflite.run(imgData, embeddings);

        return embeddings;
    }

    public double getSimilarityScore(Bitmap face1, Bitmap face2){
        float[][] face1_embedding = run(face1);
        float[][] face2_embedding = run(face2);

        double distance = 0.0;
        for (int i = 0; i &lt; EMBEDDING_SIZE; i++){
            distance += (face1_embedding[0][i] - face2_embedding[0][i]) * (face1_embedding[0][i] - face2_embedding[0][i]);
        }
        distance = Math.sqrt(distance);

        return distance;
    }

    public void close(){
        if (tflite != null) {
            tflite.close();
            tflite = null;
        }
        tfliteModel = null;
    }
}
</code></pre>
2019-11-26 09:53:21.833000+00:00
2020-01-29 20:28:26.363000+00:00
2019-12-02 06:42:06.387000+00:00
java|android|kotlin|assets|tensorflow-lite
['https://arxiv.org/abs/1804.07573']
1
58,990,824
<p>You can use <code>keras.metrics.top_k_categorical_accuracy</code> for calculating accuracy.<br><br> But this is an accuracy metric; I don't think there is any built-in top-k loss function in TensorFlow or Keras as of now.<br> A loss function should be differentiable to work with gradient-based learning methods, and <code>top_k</code> is not a differentiable function, just like the accuracy metric.<br> Hence it can be used as an accuracy metric but not as a learning objective. So you won't find any built-in method for this; however, there are research papers aiming to solve this problem. You might want to have a look at <a href="https://arxiv.org/abs/1802.07595" rel="nofollow noreferrer">Learning with Average Top-k Loss</a> and <a href="https://arxiv.org/abs/1705.08826" rel="nofollow noreferrer">Smooth Loss Functions for Deep Top-k Classification</a>.</p>
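<p>A minimal sketch of wiring this up, assuming <code>model</code> is your Keras model: the differentiable cross-entropy drives training, while top-3 accuracy is only reported:</p> <pre><code>import tensorflow as tf

model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",  # differentiable training objective
    metrics=[tf.keras.metrics.TopKCategoricalAccuracy(k=3)],  # reported top-3 accuracy
)
</code></pre>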
2019-11-22 09:07:02.247000+00:00
2019-11-22 09:07:02.247000+00:00
null
null
58,990,039
<p>I'm designing a classification model.</p> <p>I have a problem: there are many categories which have similar features. I think the best option would be to regenerate the category hierarchy, but it is fixed.</p> <p>So I focused on 3-best accuracy instead of 1-best accuracy.</p> <p>I want to define a loss function for 3-best accuracy.</p> <p>I don't care where the answer lands within positions 1 - 3.</p> <p>Is there any good loss function for that, or how can I define it?</p>
2019-11-22 08:11:39.500000+00:00
2019-11-22 09:07:02.247000+00:00
null
python|tensorflow|keras
['https://arxiv.org/abs/1802.07595', 'https://arxiv.org/abs/1705.08826']
2
60,706,791
<p>If you had access to the actual voice recordings, you could apply some augmentation techniques <a href="https://towardsdatascience.com/data-augmentation-for-speech-recognition-e7c607482e78" rel="nofollow noreferrer">used in speech recognition</a> and then re-extract the features such as fundamental frequency. However, since you're dealing directly with the features, augmentation is more tricky. It is possible to generate synthetic samples by interpolating between existing ones or adding noise, but since the features are highly correlated, you need a smart way of doing that (see <a href="https://arxiv.org/pdf/1106.1813.pdf" rel="nofollow noreferrer">this paper</a> for a simple approach and <a href="https://arxiv.org/pdf/1702.05538.pdf" rel="nofollow noreferrer">this one</a> for a more advanced technique). If you have a class imbalance problem, you can try simply over- or under-sampling.</p>
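<p>The first linked paper is SMOTE, for which <code>imbalanced-learn</code> has a ready-made implementation; a minimal sketch, assuming <code>X</code> and <code>y</code> are your feature matrix and labels:</p> <pre><code>from imblearn.over_sampling import SMOTE

# Interpolates new minority-class samples between existing nearest neighbours
X_resampled, y_resampled = SMOTE(random_state=0).fit_resample(X, y)
</code></pre>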
2020-03-16 13:21:08.523000+00:00
2020-03-16 13:21:08.523000+00:00
null
null
60,706,464
<p>I am looking for an algorithm and/or tutorial about data augmentation, but everything I find covers image augmentation. Is it possible to do this with other kinds of datasets? I am working on the Parkinson's dataset (<a href="https://archive.ics.uci.edu/ml/datasets/parkinsons" rel="nofollow noreferrer">https://archive.ics.uci.edu/ml/datasets/parkinsons</a>) and want to create an example of data augmentation with Python. Is this possible, or should I use something like MNIST/FMNIST?</p>
2020-03-16 12:58:57.287000+00:00
2020-03-16 13:21:08.523000+00:00
null
python|machine-learning|data-augmentation
['https://towardsdatascience.com/data-augmentation-for-speech-recognition-e7c607482e78', 'https://arxiv.org/pdf/1106.1813.pdf', 'https://arxiv.org/pdf/1702.05538.pdf']
3
66,696,282
<p>You don't have to worry about data augmentation too much, because YOLOv4 already applies a number of data augmentation techniques internally to help improve model performance and generalization.</p> <p>A few of the data augmentation techniques used by YOLOv4 are <code>CutMix, Blurring, Class label smoothing, Mosaic data augmentation, Self-Adversarial Training</code>. You can learn more about the different types and the impact of various augmentation methods from the YOLOv4 paper <a href="https://arxiv.org/pdf/2004.10934.pdf" rel="nofollow noreferrer">here</a>.</p> <p><a href="https://github.com/AlexeyAB/darknet/wiki/CFG-Parameters-in-the-%5Bnet%5D-section" rel="nofollow noreferrer">Here</a> is the list of all the data augmentation techniques used in <a href="https://github.com/AlexeyAB/darknet" rel="nofollow noreferrer">this</a> GitHub implementation of YOLOv4.</p>
2021-03-18 17:37:03.423000+00:00
2021-03-18 17:37:03.423000+00:00
null
null
66,569,240
<p>I want to train an object detection model using YOLOv4. I have a folder containing jpg images, with the bounding-box annotations in a txt file. I don't have much data, so I decided to do some data augmentation on it. I ran into the following problems:</p> <ol> <li>I tried Roboflow so I could get the bounding boxes directly in txt files, but the problem is that Roboflow applies data augmentation randomly, and sometimes it returns the same picture or applies only a small change.</li> <li>I tried Albumentations but had problems with the bounding boxes; the Pascal VOC format worked, but I didn't know how to apply it automatically to the whole dataset.</li> </ol> <p>Are there any other solutions or suggestions? I would be grateful. Thank you</p>
2021-03-10 16:46:47.730000+00:00
2021-03-18 17:37:03.423000+00:00
null
python|object-detection|yolo|bounding-box|data-augmentation
['https://arxiv.org/pdf/2004.10934.pdf', 'https://github.com/AlexeyAB/darknet/wiki/CFG-Parameters-in-the-%5Bnet%5D-section', 'https://github.com/AlexeyAB/darknet']
3
66,639,840
<p>If you refer to the <a href="https://arxiv.org/pdf/1506.02640.pdf" rel="nofollow noreferrer">original paper</a>, they use linear activation for the final layer. In section &quot;2.2. Training&quot; you can find:</p> <blockquote> <p>We use a linear activation function for the final layer and all other layers use the following leaky rectified linear activation...</p> </blockquote>
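<p>In Keras terms that simply means omitting the <code>activation</code> argument on the final layer (Keras then applies the identity); a minimal sketch, assuming a <code>Sequential</code> model built with <code>model.add</code>:</p> <pre><code># Final detection layer: no activation argument means a linear
# (identity) activation, which is what the paper prescribes.
model.add(keras.layers.Conv2D(8, (3, 3), strides=1))
</code></pre>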
2021-03-15 14:26:18.607000+00:00
2021-03-15 14:26:18.607000+00:00
null
null
57,443,049
<p>I have built a simple YOLO localization model in Keras like this:</p> <pre><code>model_layers = [
    keras.layers.Conv2D( 32 , input_shape=( input_dim , input_dim , 3 ) , kernel_size=( 3 , 3 ) , strides=1 , activation='relu' ),
    keras.layers.Conv2D( 32 , kernel_size=( 3 , 3 ) , strides=1 , activation='relu' ),
    keras.layers.MaxPooling2D( pool_size=( 2 , 2 ) , strides=2 ),
    keras.layers.Conv2D( 64 , kernel_size=( 3 , 3 ) , strides=1 , activation='relu' ),
    keras.layers.Conv2D( 64 , kernel_size=( 3 , 3 ) , strides=1 , activation='relu' ),
    keras.layers.MaxPooling2D( pool_size=( 2 , 2 ) , strides=2 ),
    keras.layers.Conv2D( 64 , kernel_size=( 3 , 3 ) , strides=1 , activation='relu' ),
    keras.layers.Conv2D( 64 , kernel_size=( 3 , 3 ) , strides=1 , activation='relu' ),
    keras.layers.MaxPooling2D( pool_size=( 2 , 2 ) , strides=2 ),
    keras.layers.Conv2D( 128 , kernel_size=( 3 , 3 ) , strides=1 , activation='relu' ),
    keras.layers.Conv2D( 128 , kernel_size=( 3 , 3 ) , strides=1 , activation='relu' ),
    keras.layers.Conv2D( 64 , kernel_size=( 3 , 3 ) , strides=1 , activation='relu' ),
    keras.layers.Conv2D( 64 , kernel_size=( 3 , 3 ) , strides=1 , activation='relu' ),
    keras.layers.Conv2D( 32 , kernel_size=( 3 , 3 ) , strides=1 , activation='relu' ),
    keras.layers.Conv2D( 8 , kernel_size=( 3 , 3 ) , strides=1 ),
]
model = keras.models.Sequential( model_layers )
model.compile( loss=yolo_keras_loss , optimizer=keras.optimizers.Adam( lr=0.0001 ) )
model.summary()
</code></pre> <p>As observed, the last layer's activation function is 'linear'.</p> <blockquote> <p>But with regards to YOLO's output, all the values ( confidence score, bounding box coordinates and class probabilities ) are normalized. So should I use a sigmoid activation function or a linear activation function?</p> </blockquote> <p>I cannot find the output layer's activation function in any of the resources concerning YOLO.</p>
2019-08-10 14:36:02.257000+00:00
2021-07-13 08:43:23.887000+00:00
2021-07-13 08:43:23.887000+00:00
tensorflow|keras|classification|object-detection|yolo
['https://arxiv.org/pdf/1506.02640.pdf']
1
57,428,976
<p>Finding triplets to train a Siamese neural network with the triplet loss function can be done in several ways. The original <a href="https://arxiv.org/abs/1503.03832" rel="nofollow noreferrer">FaceNet</a> paper describes the importance of hard triplets (hard positives, <em>positives</em> such that <code>argmax||f(anchor)-f(positive)||^2</code>, and hard negatives, <em>negatives</em> such that <code>argmin||f(anchor)-f(negative)||^2</code>, where f is the embedding from the neural network).</p> <p>However, in one of my Siamese networks, I selected (anchor,positive,negative) triplets randomly and it turned out to have a good classification accuracy. So you could try random triplet selection first, as hard-triplet selection is generally computationally expensive and requires a CPU cluster.</p> <p>I hope you have labelled all the images in the dataset, and each label should reflect which person the particular image refers to. For example, if you have 5 images of person A, the labels should look like <code>(A_1.jpg, A_2.jpg,...A_5.jpg)</code>, or you should have a separate directory for each person. You could select an image from one directory randomly as the anchor, select another image from the same directory as the positive, and an image from a different directory as the negative. Bundle these images in the triplet format <code>(anchor,positive,negative)</code> and repeat the process to create a batch. And there you have a training batch of images.</p> <p>I just covered the basic procedure of doing it; however, if you're looking for example code, <a href="https://towardsdatascience.com/one-shot-learning-with-siamese-networks-using-keras-17f34e75bb3d" rel="nofollow noreferrer">this</a> tutorial may help you to create batches of triplets to train the network.</p>
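<p>A minimal sketch of that directory-based random sampling in Python, assuming one sub-directory per person with at least two images each (all names here are illustrative):</p> <pre><code>import os
import random

def sample_triplet(root):
    people = [d for d in os.listdir(root)
              if os.path.isdir(os.path.join(root, d))]
    pos = random.choice(people)                              # person for anchor/positive
    neg = random.choice([p for p in people if p != pos])     # different person
    anchor, positive = random.sample(os.listdir(os.path.join(root, pos)), 2)
    negative = random.choice(os.listdir(os.path.join(root, neg)))
    return (os.path.join(root, pos, anchor),
            os.path.join(root, pos, positive),
            os.path.join(root, neg, negative))

batch = [sample_triplet("dataset/") for _ in range(32)]  # one training batch of triplets
</code></pre>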
2019-08-09 11:20:55.360000+00:00
2019-08-09 11:26:36.477000+00:00
2019-08-09 11:26:36.477000+00:00
null
57,428,524
<p>I am trying to apply one-shot learning for face recognition. I have several pictures of different people in my dataset directory and want to train my model, but the problem is I can't figure out how to provide anchor-positive and anchor-negative pairs from the dataset directory.</p> <p>I have built a custom ConvNet model and defined the triplet loss (as described in the deeplearning.ai course).</p> <p>My model</p> <pre class="lang-py prettyprint-override"><code>model = models.Sequential()
model.add(layers.Conv2D(16, (3,3), (3,3), activation='relu', input_shape=(384, 384, 1)))
model.add(layers.MaxPooling2D((2,2)))
model.add(layers.BatchNormalization())

for t in range(2):
    model.add(layers.Conv2D(32, (1,1), (1,1), activation='relu'))
    model.add(layers.Conv2D(32, (3,3), (1,1), padding='same', activation='relu'))

model.add(layers.Conv2D(64, (1,1), (1,1), activation='relu'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D((2,2)))

for t in range(3):
    model.add(layers.Conv2D(64, (1,1), (1,1), activation='relu'))
    model.add(layers.Conv2D(64, (3,3), (1,1), padding='same', activation='relu'))

model.add(layers.Conv2D(128, (1,1), (1,1), activation='relu'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D((2,2)))

for t in range(4):
    model.add(layers.Conv2D(128, (1,1), (1,1), activation='relu'))
    model.add(layers.Conv2D(128, (3,3), (1,1), padding='same', activation='relu'))

model.add(layers.Conv2D(256, (1,1), (1,1), activation='relu'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D((2,2)))

for t in range(3):
    model.add(layers.Conv2D(256, (1,1), (1,1), activation='relu'))
    model.add(layers.Conv2D(256, (3,3), (1,1), padding='same', activation='relu'))

model.add(layers.Conv2D(512, (1,1), (1,1), activation='relu'))
model.add(layers.BatchNormalization())
model.add(layers.AveragePooling2D((4,4)))
model.add(layers.Flatten())
model.add(layers.Dense(128))
model.add(layers.Lambda(lambda x: backend.l2_normalize(x,axis=1)))
</code></pre> <p>Triplet_loss</p> <pre class="lang-py prettyprint-override"><code>def triplet_loss(y_true, y_pred, alpha = 0.3):
    """
    Implementation of the triplet loss as defined by formula (3)

    Arguments:
    y_pred -- python list containing three objects:
            anchor -- the encodings for the anchor images, of shape (None, 128)
            positive -- the encodings for the positive images, of shape (None, 128)
            negative -- the encodings for the negative images, of shape (None, 128)

    Returns:
    loss -- real number, value of the loss
    """

    anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]

    # Step 1: Compute the (encoding) distance between the anchor and the positive, you will need to sum over axis=-1
    pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1)
    # Step 2: Compute the (encoding) distance between the anchor and the negative, you will need to sum over axis=-1
    neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1)
    # Step 3: subtract the two previous distances and add alpha.
    basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
    # Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.
    loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))

    return loss
</code></pre> <p>Model Compilation</p> <pre class="lang-py prettyprint-override"><code>model.compile(optimizer='adam',loss='triplet_loss',metrics=['accuracy'])
</code></pre> <p>Please help me in making anchor-positive and anchor-negative pairs for training. I don't have any idea how to handle the dataset directory in this regard!</p>
2019-08-09 10:50:43.877000+00:00
2019-08-09 11:31:10.423000+00:00
null
python|tensorflow|keras|deep-learning|conv-neural-network
['https://arxiv.org/abs/1503.03832', 'https://towardsdatascience.com/one-shot-learning-with-siamese-networks-using-keras-17f34e75bb3d']
2
3,224,033
<p>I'm not sure from the wording of your question if you are interested in set union or concatenation or both, or if you're only interested in persistent data structures as are common in OCaml or also in ephemeral structures.</p> <p>An implementation of <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.38.4454" rel="noreferrer">red-black trees with fingers is described by Heather D. Booth in a chapter from her thesis</a>. With fingers, a red-black tree of size n can be split into two trees of size p and q in amortized O(lg (min (p,q))) time and two red-black trees of size p and q can be concatenated in the same bound. Additionally, an element can be added or deleted at either end of an rb tree in amortized O(1) time. With these operations, it is possible to achieve amortized O(p lg(q/p)) time set union (for p &lt; q), which is information-theoretically optimal. Perhaps the key idea to get these bounds is the reversal of the child pointers on the left and right spines.</p> <p>The bounds above are amortized in the traditional sense. For a functional language like OCaml, one might wish to have bounds that apply when a data structure is used persistently. I do not think Booth's description will achieve all of those bounds when the trees are used persistently. For example, insertion at a finger can take &omega;(1) recolorings. This might be solved via the <a href="http://www.cs.cmu.edu/~sleator/papers/making-data-structures-persistent.pdf" rel="noreferrer">lazy recolorings discussed in Driscoll et al.'s "Making Data Structures Persistent"</a>.</p> <p>On the other hand, I think Booth's analysis might show that concatenation is still O(lg (max (p,q))) even when used persistently. I'm less optimistic about the set union bound.</p> <p>Set operations with asymptotically optimal time bounds are possible in a functional setting. Those <a href="http://www.soi.city.ac.uk/~ross/papers/FingerTree.pdf" rel="noreferrer">described by Hinze &amp; Paterson</a> achieve the bounds in an amortized (but persistent) sense, the <a href="http://www.cs.cmu.edu/afs/cs/project/pscico/pscico/papers/fingertrees/main.pdf" rel="noreferrer">treaps described by Blandford &amp; Blelloch achieve the bounds in a randomized sense</a>, and those <a href="http://www.math.tau.ac.il/~haimk/papers/loglog23.ps" rel="noreferrer">described by Kaplan &amp; Tarjan</a> achieve them in worst-case time. The latter also offer O(lg lg (min(p,q))) concatenation, though Hinze &amp; Paterson are dubious of that claim. These trees are not a direct answer to your question, which is specific to red-black trees, but they hopefully give a flavor of what is possible, and the H&amp;P paper includes code, and <a href="http://mattam.org/research/russell/fingertrees.en.html" rel="noreferrer">has been verified correct using Coq</a>, which can extract to OCaml code.</p> <p>Two more pointers you might be interested in: <a href="http://www.cs.au.dk/~gerth/pub/esa06trees.html" rel="noreferrer">Brodal et al. presented search trees with O(lg n) find, insert, and delete and O(1) concat even in a functional setting</a>. Additionally, <a href="http://www.cs.purdue.edu/research/technical_reports/1993/TR%2093-035.pdf" rel="noreferrer">Atallah et al. 
claim to describe a red-black tree that has amortized O(1) concat (presumably ephemerally only)</a>, but <a href="http://www.ics.uci.edu/~goodrich/pubs/esa-maxima.pdf" rel="noreferrer">Buchsbaum and Goodrich claim that there are several flaws in that structure</a>.</p> <p>One final note about the utility of red-black trees: in one of the comments on one of the answers to this question, you say:</p> <blockquote> <p>The only advantage of a red-black tree is that the auxiliary information (red or black) is only 1-bit per branch. By adding height, you've lost that advantage and might as well just use a height-balanced tree instead.</p> </blockquote> <p>There are other advantages as well. For instance, some data structures used in computational geometry are based on binary search trees but have a high cost of tree rotation. <a href="http://dx.doi.org/10.1016/0020-0190(83)90099-6" rel="noreferrer">Red-black trees can be rebalanced in at most 3 rotations per insert and delete</a>, while <a href="https://arxiv.org/abs/1506.03528" rel="noreferrer">AVL trees can take &Omega;(lg n) rotations for these operations</a>. <a href="http://archive.cs.uu.nl/pub/RUU/CS/techreps/CS-2001/2001-09.pdf" rel="noreferrer">As Ralf Hinze noticed</a>, <a href="http://www.eecs.usma.edu/webs/people/okasaki/jfp99.ps" rel="noreferrer">Okasaki's rebalancing scheme for red-black trees</a> (code available in <a href="http://www.eecs.usma.edu/webs/people/okasaki/pfds-sml.tar.gz" rel="noreferrer">ML</a>, <a href="http://www.eecs.usma.edu/webs/people/okasaki/pfds-haskell.tar.gz" rel="noreferrer">Haskell</a>, <a href="http://www.eecs.usma.edu/webs/people/okasaki/sigcse05/index.html" rel="noreferrer">Java, and Ada</a>) does not offer the same bound, and can end up doing &Omega;(lg n) rotations on insertion. (Okasaki does not present deletion.)</p> <p>Additionally, height-balanced search trees (and even AVL trees) can be stored so as to use only one bit of balance information per node. Some trees have only two possible balance positions at each node, like one-sided height-balanced trees, but trees with up to four possible balance positions per node can store one bit of balance information in each child, as <a href="http://linkinghub.elsevier.com/retrieve/pii/0020019078900054" rel="noreferrer">initially explained by Brown</a> and later <a href="http://www.cs.princeton.edu/~sssix/papers/rb-trees.pdf" rel="noreferrer">expanded upon by Haeupler et al.</a></p> <p><b>Edit:</b></p> <p>In answer to your specific query at the end of your question, here is a description of an algorithm for concatenating two red-black trees. It takes O(lg(max(|L|,|R|))) time, which is too long to get the asymptotically optimal union time I describe above. For comparison, I expect that <a href="http://caml.inria.fr/cgi-bin/viewcvs.cgi/ocaml/trunk/stdlib/set.ml?rev=6694&amp;view=markup" rel="noreferrer">the "join" implementation for AVL sets in OCaml's stdlib</a> gets O(h1-h2) performance, where h1 is the height of the taller tree, though it actually joins two AVL trees given an element that fits between them, while the algorithm below has to find and remove that mortar element from one of its arguments. You could avoid that by only storing elements at the leaves, as in a B+ tree, but that has a space penalty of having to keep a bunch of pointers to elements in the non-leaf nodes to guide search. 
In any case, it wouldn't make join constant time for trees of the same height like the AVL join code in the OCaml stdlib, since you would still have to calculate the black height of each tree, as explained below.</p> <p>Given two non-empty red-black trees L and R, we will produce a new red-black tree that is the concatenation of L and R. This will take time proportional to O(lg (max(|L|,|R|))), where |L| denotes the number of nodes in L.</p> <p>First, remove the largest element from L, c. Next, find the black height of L and R. By "black height", I mean the number of black nodes on any path from the root to a leaf. By the red-black tree invariants, this is constant on all paths of any given tree. Call L's black height p and R's black height q, and assume w.l.o.g. p &le; q.</p> <p>From the root of R, follow left children until arriving at a black node R' with height p. Make a new red tree C with root element c, left child L and right child R'. Since L is a red-black tree on its own, its root is black, and the color invariants are not violated at or below C. Furthermore, the black height of C is p.</p> <p>However, we cannot simply splice C back into R in place of R'. First, if p = q, R' is R, yet C has a red root. In this case, simply recolor the root of C black. This is your new concatenated tree.</p> <p>Second, if R' is not the root, it may have a red parent. Red parents are not permitted to have red children, so we must rebalance. Here we just apply Okasaki's rebalancing scheme all the way up the spine between R' (now replaced with C) and the root of R.</p> <p>There are two possible cases. If C has no grandparent, color C's parent black. The tree is now valid.</p> <p>If C has a grandparent, it must be black and of black height p+1, since C's parent is red. Replace C's grandparent with a new red tree, the root of which is the root of C's parent, the left child of which is C, recolored black, and the right child of which is a black tree that consists of C's sibling, C's grandparent's root, and C's uncle, in that order. This doesn't increase the black height of C's grandparent, but it changes its color to red, which might make it a root or a red child of a red parent, so we have to rebalance again, and so on all the way up the tree</p> <ul> <li>Finding the black height of both trees : O(lg |L|) + O(lg |R|)</li> <li>Tracing down R to the right spot: O(lg |R| - lg |L|)</li> <li>Rotations all the way back up to the root: O(lg |R| - lg |L|)</li> </ul> <p>None of these is greater than O(lg |R| + lg |L|) = O(lg (max(|L|,|R|)))</p> <p>To make this O(lg (min(|L|,|R|))), first reverse the spine pointers. Then you don't need the black height of the larger tree, you only need to count black spine nodes until one tree runs out of spine. Then, use the original (not Okasaki's) rebalancing scheme to make sure you only rebalance O(1) nodes. Finally, mark the rest of the spine that doesn't need rebalancing for lazy recoloring if necessary later.</p>
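<p>As a small illustration of the first steps above (computing the black heights and descending to the splice point), a minimal Python sketch, assuming nodes carry <code>color</code> and <code>left</code> fields and empty trees are <code>None</code>:</p> <pre><code>def black_height(node):
    # Number of black nodes from `node` down to a leaf; by the red-black
    # invariants this is equal along every path, so the left spine suffices.
    h = 0
    while node is not None:
        if node.color == "black":
            h += 1
        node = node.left
    return h

def descend_to_black_height(node, p):
    # Follow left children until reaching a black node of black height p
    h = black_height(node)
    while not (node.color == "black" and h == p):
        if node.color == "black":
            h -= 1          # moving past a black node lowers the remaining height
        node = node.left
    return node
</code></pre>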
2010-07-11 17:45:54.517000+00:00
2017-01-21 23:16:35.663000+00:00
2017-01-21 23:16:35.663000+00:00
null
3,176,863
<p>The OCaml standard library has a wonderful <code>Set</code> implementation that uses a very efficient divide-and-conquer algorithm to compute the <code>union</code> of two sets. I believe it takes whole subtrees (not just single elements) from one set and inserts them into the other set, rebalancing when necessary.</p> <p>I'm wondering if this requires the height information that is kept in the AVL tree that OCaml uses or if this is also possible with red-black trees. For example, is it possible to concatenate a pair of red-black trees more efficiently than simply iterating over the second tree appending its elements to the end of the first tree?</p>
2010-07-05 01:49:03.830000+00:00
2017-01-21 23:16:35.663000+00:00
2017-01-19 09:34:06.597000+00:00
algorithm|data-structures|red-black-tree
['http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.38.4454', 'http://www.cs.cmu.edu/~sleator/papers/making-data-structures-persistent.pdf', 'http://www.soi.city.ac.uk/~ross/papers/FingerTree.pdf', 'http://www.cs.cmu.edu/afs/cs/project/pscico/pscico/papers/fingertrees/main.pdf', 'http://www.math.tau.ac.il/~haimk/papers/loglog23.ps', 'http://mattam.org/research/russell/fingertrees.en.html', 'http://www.cs.au.dk/~gerth/pub/esa06trees.html', 'http://www.cs.purdue.edu/research/technical_reports/1993/TR%2093-035.pdf', 'http://www.ics.uci.edu/~goodrich/pubs/esa-maxima.pdf', 'http://dx.doi.org/10.1016/0020-0190(83)90099-6', 'https://arxiv.org/abs/1506.03528', 'http://archive.cs.uu.nl/pub/RUU/CS/techreps/CS-2001/2001-09.pdf', 'http://www.eecs.usma.edu/webs/people/okasaki/jfp99.ps', 'http://www.eecs.usma.edu/webs/people/okasaki/pfds-sml.tar.gz', 'http://www.eecs.usma.edu/webs/people/okasaki/pfds-haskell.tar.gz', 'http://www.eecs.usma.edu/webs/people/okasaki/sigcse05/index.html', 'http://linkinghub.elsevier.com/retrieve/pii/0020019078900054', 'http://www.cs.princeton.edu/~sssix/papers/rb-trees.pdf', 'http://caml.inria.fr/cgi-bin/viewcvs.cgi/ocaml/trunk/stdlib/set.ml?rev=6694&view=markup']
19
6,187,970
<p>Sounds like you might want to read <a href="http://arxiv.org/abs/0907.0884" rel="noreferrer">Self-Improving Algorithms</a>: they achieve an eventual optimal expected running time for <em>arbitrary</em> input distributions.</p> <blockquote> <p>We give such self-improving algorithms for two problems: (i) sorting a sequence of numbers and (ii) computing the Delaunay triangulation of a planar point set. Both algorithms achieve optimal expected limiting complexity. The algorithms begin with a training phase during which they collect information about the input distribution, followed by a stationary regime in which the algorithms settle to their optimized incarnations.</p> </blockquote> <p>If you already know your input distribution is approximately Gaussian, then perhaps another approach would be more efficient in terms of space complexity, but in terms of expected running time this is a rather wonderful result.</p>
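<p>To make the Gaussian example from the question concrete: the usual distribution-aware trick is bucket sort keyed on the estimated CDF, so every element lands near its final position and each bucket is cheap to finish off. A minimal Python sketch (not from the linked paper; <code>mu</code> and <code>sigma</code> would be your on-line estimates):</p> <pre><code>import math

def gaussian_cdf_sort(xs, mu, sigma):
    n = len(xs)
    buckets = [[] for _ in range(n)]
    for x in xs:
        # Estimated final rank of x via the Gaussian CDF
        u = 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
        buckets[min(int(u * n), n - 1)].append(x)
    out = []
    for b in buckets:   # each bucket is tiny in expectation -&gt; cheap to sort
        out.extend(sorted(b))
    return out
</code></pre>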
2011-05-31 13:02:14.013000+00:00
2011-05-31 13:02:14.013000+00:00
null
null
6,166,546
<p>It just occurred to me that if you know something about the distribution (in the statistical sense) of the data to sort, the performance of a sorting algorithm might benefit if you take that information into account.</p> <p>So my question is, are there any sorting algorithms that take into account that kind of information? How good are they?</p> <p>An example to clarify: if you know the distribution of your data to be Gaussian, you could estimate the mean and variance on the fly as you process the data. This would give you an estimate of the final position of each number, which you could use to place them close to their final position.</p> <p>I'm pretty surprised the answer isn't a wiki link to a thorough page discussing this issue. Isn't this a very common case (the Gaussian case, for example)?</p> <p>I'm adding a bounty to this question, because I'm looking for definite answers with sources, not speculation. Something like &quot;in the case of Gaussian-distributed data, XYZ algorithm is the fastest on average, as was proved by Smith et al. [1]&quot;. However, any additional information is welcome.</p>
2011-05-29 07:46:27.590000+00:00
2022-02-12 03:03:30.120000+00:00
2022-02-12 03:03:30.120000+00:00
algorithm|performance|sorting|statistics|complexity-theory
['http://arxiv.org/abs/0907.0884']
1
54,182,700
<p>Gradient ascent/descent can only find <em>local</em> optima; in order to find &quot;global&quot; optima you just run that procedure many times with random initialization and take the best value you find.</p> <p>You can do the same in your situation as well: take random initial points and follow the gradient, stopping at convergence or when you step outside the domain.</p> <p>You can make this a bit faster by dynamically restricting the domain when you step out of it. For example, suppose you are maximizing between -10 and 10, and choose 6 as an initial point; you run gradient ascent and reach 10. You can now exclude the interval [6,10] from the random initialization, since you know you will end up reaching 10 and stopping there.</p> <p>But I would actually advise you to use <a href="https://arxiv.org/abs/1807.02811" rel="nofollow noreferrer">Bayesian optimization</a>. Its advantages over gradient ascent/descent are:</p> <ol> <li>does not require a gradient</li> <li>made for global optimization</li> <li>allows setting bounds on the parameters</li> <li>requires far fewer function evaluations</li> </ol> <p>Finally, an obligatory word of caution: this problem cannot be solved <em>in the general case</em>; consider, e.g., a function that equals <code>1</code> at <code>x=3.4131242351</code> and <code>0</code> everywhere else. However, in practice, you should be fine.</p>
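<p>A minimal Python sketch of the restart-plus-clamping strategy described above (the step size, restart count and iteration cap are arbitrary knobs to tune):</p> <pre><code>import random

def minimize_on_interval(f, grad, a, b, restarts=20, gamma=0.01, steps=10_000):
    # Candidate minima: both endpoints, plus one gradient-descent run per restart
    best = min(a, b, key=f)
    for _ in range(restarts):
        x = random.uniform(a, b)
        for _ in range(steps):
            x -= gamma * grad(x)
            if x &lt;= a or x &gt;= b:           # stepped outside the domain
                x = max(a, min(b, x))      # clamp to the nearer endpoint
                break
        if f(x) &lt; f(best):
            best = x
    return best
</code></pre> <p>For the question's example, <code>minimize_on_interval(math.sin, math.cos, 3*math.pi/4, 5*math.pi/4)</code> ends up at the right endpoint, which matches the true minimum of sin on that interval.</p>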
2019-01-14 13:43:03.390000+00:00
2019-01-15 13:28:57.970000+00:00
2019-01-15 13:28:57.970000+00:00
null
54,182,529
<p>I was looking for a <strong>numerical</strong> algorithm to find the <strong>global</strong> minimum or maximum of a function in a given interval [a, b], for example finding the minimum and maximum of the function </p> <blockquote> <p>f(x) = sin(x)</p> </blockquote> <p>in the domain [3*pi/4, 5*pi/4].</p> <p>I know how to find the global min/max of a multi-variable function using Gradient Descent or Gradient Ascent, but I'm only able to use these algorithms on the entire function domain. For example, when I use GD on the function sin(x), it gives me -1, which is correct for the domain [0, 2*pi] but not for [3*pi/4, 5*pi/4]. Any help?</p> <p>I have reached this solution so far (code in Python 2.7; the language isn't important, my question is about algorithms):</p> <pre><code>import math
import random

# function
def f(x):
    return math.sin(x)

# xmin-xmax interval
xmin = 3.0 * math.pi / 4.0
xmax = 5.0 * math.pi / 4.0

# find ymin-ymax
steps = 10000
ymin = f(xmin)
ymax = ymin
for i in range(steps):
    x = xmin + (xmax - xmin) * float(i) / steps
    y = f(x)
    if y &lt; ymin:
        ymin = y
    if y &gt; ymax:
        ymax = y

print ymin
print ymax
</code></pre> <p><strong>answer</strong></p> <p>Thanks to @BlackBear, I wrote a program that does what I actually need. This function searches through the interval [a, b] using the Gradient Descent algorithm; on each loop it starts with a new random starting point between a and b, then compares the values, and at the end it returns the x where the minimum occurs.</p> <pre><code>double gradientDescentInterval(const char *expression, double a, double b, double ete, double ere, double gamma, unsigned int maxiter, int mode) { /* * Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function. * To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of * the gradient (or approximate gradient) of the function at the current point. 
* * This function searches minimum on an interval [a, b] * * ARGUMENTS: * expressions the function expression, it must be a string array like "x^2+1" * a starting point of interval [a, b] * b ending point of interval [a, b] * ete estimated true error * ere estimated relative error * gamma step size (also known as learning rate) * maxiter maximum iteration threshold * mode show process {0: no, 1: yes} * */ // fix interval reverse if (a &gt; b) { double temp = a; a = b; b = temp; } // end of if // check error thresholds if (ere &lt; 0 || ete &lt; 0) { printf("\nError: ete or ere argument is not valid\n"); Exit(); exit(EXIT_FAILURE); } // end of if // check mode if (mode != 0 &amp;&amp; mode != 1) { printf("\nError: mode argument is not valid\n"); Exit(); exit(EXIT_FAILURE); } // end of if // check maxiter to be more than zero if (maxiter &lt;= 0) { printf("Error: argument maxiter must be more than zero!\n"); Exit(); exit(EXIT_FAILURE); } // end of maxiter check // initializing variables unsigned int iter = 0, innerIter = 0; // choose an arbitrary result at midpoint between a and b to be updated later double coefficient = (b - a), result = a + coefficient / 2; double x, past_x, fx, fresult; double ete_err, ere_err; double fa = function_1_arg(expression, a); double fb = function_1_arg(expression, b); // set the seed for random number generator seed(); while (iter &lt; maxiter) { // try maxiter times to find minimum in given interval [a, b] and return lowest result // update fresult with new result fresult = function_1_arg(expression, result); // choose a random starting point x = a + coefficient * zeroToOneUniformRandom(); // set inner iter to zero before new loop innerIter = 0; // go in a loop to find a minimum with random starting point while (innerIter &lt; maxiter) { // calculate new x by subtracting the derivative of function at x multiplied by gamma from x past_x = x; x -= firstDerivative_1_arg(expression, x, DX) * gamma; fx = function_1_arg(expression, x); // calculate errors ete_err = fabs(past_x - x); ere_err = fabs(ete_err / x); if (mode) { printf("\nIn this iteration [#%d][#%d], x = %.5e f(x) = %.5e\n" "and estimated true error = %.5e and estimated relative error = %.5e,\n", iter, innerIter, x, fx, ete_err, ere_err); } // end if(mode) // Termination Criterion // if new x goes beyond interval lower than a if (x &lt; a) { if (mode) { printf("\nIn this iteration the calculated x is less than a : %.5e &lt; %f" "so minimum of the function occurs at a\n", x, a); } // end if(mode) // if fa is lower than f(result), then a is where the minimum occurs if (fa &lt; fresult) { result = a; } // end of if break; } // end of if // if new x goes beyond interval bigger than b if (x &gt; b) { if (mode) { printf("\nIn this iteration the calculated x is bigger than b : %.5e &gt; %f" "so minimum of the function occurs at b\n", x, b); } // end if(mode) // if fb is lower than f(result), then b is where the minimum occurs if (fb &lt; fresult) { result = b; } // end of if break; } // end of if // if calculated error is less than estimated true error threshold if (ete != 0 &amp;&amp; ete_err &lt; ete) { if (mode) { printf("\nIn this iteration the calculated estimated true error is less than the threshold\n" "(estimated true error) %.5e &lt; %.5e (threshold)\n" "so the calculated x is the point on domain that minimum of the function happens\n", ete_err, ete); } // end if(mode) // if fx is lower than f(result), then x is where the minimum occurs if (fx &lt; fresult) { result = x; } // end of if break; } // end 
of estimated true error check // if calculated error is less than estimated relative error threshold if (ere != 0 &amp;&amp; ere_err &lt; ere) { if (mode) { printf("\nIn this iteration the calculated estimated real error is less than the threshold\n" "(estimated real error) %.5e &lt; %.5e (threshold)\n" "so the calculated x is the point on domain that minimum of the function happens\n", ere_err, ere); } // end if(mode) // if fx is lower than f(result), then x is where the minimum occurs if (fx &lt; fresult) { result = x; } // end of if break; } // end of estimated relative error check innerIter++; } // end of inner while loop iter++; } // end of while loop // return result return result; } </code></pre> <p>Many functions here may seem unknown to you; they are coded in separate files. You can see them at <a href="https://github.com/MahdiBaghbani/C-Math/tree/development" rel="nofollow noreferrer">my GitHub repository</a>.</p>
2019-01-14 13:32:25.360000+00:00
2019-02-01 13:20:40.613000+00:00
2019-02-01 13:20:40.613000+00:00
python|algorithm|max|min|numerical-methods
['https://arxiv.org/abs/1807.02811']
1
63,066,774
<p>Any <code>tff.Computation</code> (like <code>next</code>) will always run the <em>entire</em> specified computation. If your <code>tff.templates.IterativeProcess</code> is, for example, the result of <a href="https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_averaging_process" rel="nofollow noreferrer"><code>tff.learning.build_federated_averaging_process</code></a>, its <code>next</code> function will represent one round of the federated averaging algorithm.</p> <p>The federated averaging algorithm runs training for a fixed number of <em>epochs</em> (let's say 1 for simplicity) over each local dataset, and averages the model updates in a data-weighted manner at the server in order to complete a round--see <a href="https://arxiv.org/pdf/1602.05629.pdf" rel="nofollow noreferrer">Algorithm 1 in the original federated averaging paper</a> for a specification of the algorithm.</p> <p>Now, for how TFF represents and executes this algorithm. From the documentation for <a href="https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_averaging_process" rel="nofollow noreferrer"><code>build_federated_averaging_process</code></a>, the <code>next</code> function has type signature:</p> <pre><code>(&lt;S@SERVER, {B*}@CLIENTS&gt; -&gt; &lt;S@SERVER, T@SERVER&gt;) </code></pre> <p>TFF's type system represents a dataset as a <a href="https://www.tensorflow.org/federated/api_docs/python/tff/SequenceType" rel="nofollow noreferrer"><code>tff.SequenceType</code></a> (this is the meaning of the <code>*</code> above), so the second element in the parameter of the type signature represents a set (technically a multiset) of datasets with elements of type <code>B</code>, placed at the clients.</p> <p>What this means in your example is as follows. You have a list of <code>tf.data.Datasets</code>, each of which represents the local data on each client--you can think of the list as representing the federated placement. In this context, TFF executing the entire specified computation means: TFF will treat every item in the list as a client to be trained on in this round. In the terms of the algorithm linked above, your list of datasets represents the set S_t.</p> <p>TFF will faithfully execute one round of the federated averaging algorithm, with the <code>Dataset</code> elements of your list representing the clients selected for this round. Training will be run for a single epoch on each client (in parallel); if datasets have different amounts of data, you are correct that the training on each client is likely to finish at different times. However, this is the correct semantics of a single round of the federated averaging algorithm, as opposed to a parameterization of a similar algorithm like <a href="https://openai.com/blog/reptile/" rel="nofollow noreferrer">Reptile</a>, which runs for a fixed number of steps for each client.</p> <p>If you wish to select a subset of clients to run a round of training on, this should be done <em>in Python</em>, before calling into TFF, e.g.:</p> <pre class="lang-py prettyprint-override"><code>state = iterative_process.initialize() # ls is list of datasets sampled_clients = random.sample(ls, N_CLIENTS) state = iterative_process.next(state, sampled_clients) </code></pre> <p>Generally, you can think of the Python runtime as an &quot;experiment driver&quot; layer--any selection of clients, for example, should happen at this layer. 
See the beginning of <a href="https://stackoverflow.com/questions/59835749/implement-data-generator-in-federated-training/59865565#59865565">this answer</a> for further detail on this.</p>
2020-07-24 04:22:51.880000+00:00
2020-07-24 04:22:51.880000+00:00
null
null
63,043,501
<p>I went through the Federated Learning tutorial. I was wondering how the .next function works when we call it on an iterative process. Assume that we have training data which is a list of lists: the outer list is a list of clients, and the inner lists are batches of data for each client. Then we create an iterative process, for example a federated averaging process, and we initialize the state. What exactly happens when we call IterativeProcess.next on this training data? Does it take from these data randomly in each round? Or does it just take data from each client one batch at a time?</p> <p>Assume that I have a list of tf.data.Datasets, each representing a client's data. How can I add some randomness to sampling from this list for the next iteration of federated learning?</p> <p>My datasets are not necessarily the same length. When one of them is completely iterated over, does this dataset wait for all other datasets to completely iterate over their data or not?</p>
2020-07-22 21:30:49.503000+00:00
2020-07-24 04:25:48.337000+00:00
null
tensorflow-federated
['https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_averaging_process', 'https://arxiv.org/pdf/1602.05629.pdf', 'https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_averaging_process', 'https://www.tensorflow.org/federated/api_docs/python/tff/SequenceType', 'https://openai.com/blog/reptile/', 'https://stackoverflow.com/questions/59835749/implement-data-generator-in-federated-training/59865565#59865565']
6
63,066,808
<p><strong>Does (the iterative process) take from these data randomly in each round? Or just take data from each client one batch at a time?</strong></p> <p>The TFF tutorials all use <a href="https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_averaging_process" rel="noreferrer"><code>tff.learning.build_federated_averaging_process</code></a> which constructs a <a href="https://www.tensorflow.org/federated/api_docs/python/tff/templates/IterativeProcess" rel="noreferrer"><code>tff.templates.IterativeProcess</code></a> that implements the Federated Averaging algorithm (<a href="https://arxiv.org/abs/1602.05629" rel="noreferrer">McMahan et al. 2017</a>). In this algorithm each &quot;round&quot; (one invocation of <code>IterativePocess.next()</code>) processes as many batches of examples on each client as the <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset" rel="noreferrer"><code>tf.data.Dataset</code></a> is setup to produce in one iteration. <a href="https://www.tensorflow.org/guide/data" rel="noreferrer">tf.data: Build TensorFlow input pipelines</a> is a great guide for <code>tf.data.Dataset</code>.</p> <p>The order in which examples are processed is determined by how the <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset" rel="noreferrer"><code>tf.data.Dataset</code></a>s that were passed into the <code>next()</code> method as arguments were constructed. For example, in the <a href="https://www.tensorflow.org/federated/tutorials/federated_learning_for_text_generation" rel="noreferrer">Federated Learning for Text Generation</a> tutorial's section titled <a href="https://www.tensorflow.org/federated/tutorials/federated_learning_for_text_generation#load_and_preprocess_the_federated_shakespeare_data" rel="noreferrer">Load and Preprocess the Federated Shakespeare Data</a>, each client dataset is setup with preprocessing pipeline:</p> <pre class="lang-py prettyprint-override"><code>def preprocess(dataset): return ( # Map ASCII chars to int64 indexes using the vocab dataset.map(to_ids) # Split into individual chars .unbatch() # Form example sequences of SEQ_LENGTH +1 .batch(SEQ_LENGTH + 1, drop_remainder=True) # Shuffle and form minibatches .shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True) # And finally split into (input, target) tuples, # each of length SEQ_LENGTH. .map(split_input_target)) </code></pre> <p>The next function will iterate over these datasets in its entirety once each invocation of <code>next()</code>, in this case since there is no call to <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset?hl=en#repeat" rel="noreferrer"><code>tf.data.Dataset.repeat()</code></a>, <code>next()</code> will have each client see all of its examples once.</p> <p><strong>Assume that I have a list of tf.data.Datasets each representing a client data. How can I add some randomness to sampling from this list for the next iteration of federated learning?</strong></p> <p>To add randomness to each client's dataset, one could use the <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset?hl=en#shuffle" rel="noreferrer"><code>tf.data.Dataset.shuffle()</code></a> to first randomize the order of yielded examples, and then <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset?hl=en#take" rel="noreferrer"><code>tf.data.Dataset.take()</code></a> to take only a sample of that new random ordering. 
This could be added to the <code>preprocess()</code> method above.</p> <p>Alternatively, randomness in the <em>selection of clients</em> (e.g. randomly picking which clients participate each round) can be done using any Python library to sub-sample the list of datasets, e.g. Python's <a href="https://docs.python.org/3/library/random.html#random.sample" rel="noreferrer"><code>random.sample</code></a>.</p> <p><strong>My datasets are not necessarily the same length. When one of them is completely iterated over, does this dataset waits for all other datasets to completely iterate over their data or not?</strong></p> <p>Each dataset is only iterated over once on each invocation of <code>.next()</code>. This is in line with the synchronous communication &quot;rounds&quot; in <a href="https://arxiv.org/abs/1602.05629" rel="noreferrer">McMahan et al. 2017</a>. In some sense, yes, the datasets &quot;wait&quot; for each other.</p>
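<p>As a rough sketch of the per-client randomness described above (the buffer, subsample and client counts are made-up placeholders):</p> <pre><code>import random
import tensorflow as tf

SHUFFLE_BUFFER = 100   # placeholder: at least the number of examples per client
N_EXAMPLES = 50        # placeholder: random subsample size per round
N_CLIENTS = 10         # placeholder: clients sampled per round

def randomize(dataset):
    # new random order on every iteration, then a random subsample
    return dataset.shuffle(SHUFFLE_BUFFER, reshuffle_each_iteration=True).take(N_EXAMPLES)

# e.g. applied to each sampled client before calling next():
# sampled = [randomize(ds) for ds in random.sample(client_datasets, N_CLIENTS)]
</code></pre>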
2020-07-24 04:25:48.337000+00:00
2020-07-24 04:25:48.337000+00:00
null
null
63,043,501
<p>I went through the Federated Learning tutorial. I was wondering how the .next function works when we call it on an iterative process. Assume that we have training data which is a list of lists: the outer list is a list of clients, and the inner lists are batches of data for each client. Then we create an iterative process, for example a federated averaging process, and we initialize the state. What exactly happens when we call IterativeProcess.next on this training data? Does it take from these data randomly in each round? Or does it just take data from each client one batch at a time?</p> <p>Assume that I have a list of tf.data.Datasets, each representing a client's data. How can I add some randomness to sampling from this list for the next iteration of federated learning?</p> <p>My datasets are not necessarily the same length. When one of them is completely iterated over, does this dataset wait for all other datasets to completely iterate over their data or not?</p>
2020-07-22 21:30:49.503000+00:00
2020-07-24 04:25:48.337000+00:00
null
tensorflow-federated
['https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_averaging_process', 'https://www.tensorflow.org/federated/api_docs/python/tff/templates/IterativeProcess', 'https://arxiv.org/abs/1602.05629', 'https://www.tensorflow.org/api_docs/python/tf/data/Dataset', 'https://www.tensorflow.org/guide/data', 'https://www.tensorflow.org/api_docs/python/tf/data/Dataset', 'https://www.tensorflow.org/federated/tutorials/federated_learning_for_text_generation', 'https://www.tensorflow.org/federated/tutorials/federated_learning_for_text_generation#load_and_preprocess_the_federated_shakespeare_data', 'https://www.tensorflow.org/api_docs/python/tf/data/Dataset?hl=en#repeat', 'https://www.tensorflow.org/api_docs/python/tf/data/Dataset?hl=en#shuffle', 'https://www.tensorflow.org/api_docs/python/tf/data/Dataset?hl=en#take', 'https://docs.python.org/3/library/random.html#random.sample', 'https://arxiv.org/abs/1602.05629']
13
62,478,260
<p>This is a gorgeous problem, both mathematically and from the algorithmic point of view.</p> <p>Let me try to explain each part.</p> <p><strong>The mathematics</strong></p> <p>This part is better read with nicely typeset formulas. See a concise explanation <a href="https://franklinvp.github.io/2020-06-05-PolyaFooBar/" rel="noreferrer">here</a> where links to further reading are given.</p> <blockquote> <p>Let me add a reference directly here: For example Harary and Palmer's <em>Graphical enumeration</em>, Chapter 2.</p> </blockquote> <p>In short, there is a set (the whole set of <code>h x w</code>-matrices, where the entries can take any of <code>s</code> different values) and a <a href="https://en.wikipedia.org/wiki/Permutation_group" rel="noreferrer">group of permutations</a> that transforms some matrices in others. In the problem the group consists of all permutations of rows and/or columns of the matrices.</p> <p>The set of matrices gets divided into classes of matrices that can be transformed into one another. The goal of the problem is to count the number of these classes. In technical terminology the set of classes is called the <em>quotient of the set by the action of the group</em>, or <em>orbit space</em>.</p> <p>The good thing is that there is a powerful theorem (with many generalizations and versions) that does exactly that. That is <a href="https://en.wikipedia.org/wiki/P%C3%B3lya_enumeration_theorem" rel="noreferrer">Polya's enumeration theorem</a>. The theorem expresses the number of elements of the orbit space in terms of the value of a polynomial known in the area as <em>Cycle Index</em>. Now, in this problem the group is a <a href="https://en.wikipedia.org/wiki/Direct_product_of_groups#:%7E:text=In%20mathematics%2C%20specifically%20in%20group,of%20direct%20product%20in%20mathematics." rel="noreferrer">direct product</a> of two special groups <a href="https://en.wikipedia.org/wiki/Symmetric_group" rel="noreferrer">the group of all permutations</a> of <code>h</code> and <code>w</code> elements, respectively. The Cycle Index polynomials for these groups are known, and so are formulas for computing the Cycle Index polynomial of the product of groups in terms of the Cycle Index polynomials of the factors.</p> <p>Maybe a comment worth making that motivates the name of the polynomial is the following: Every permutation of elements can be seen as cycling disjoint subsets of those elements. For example, a permutation of (1,2,3,4,5) and can be (2,3,1,5,4), where we mean that 2 moved to the position of 1, 3 moved to the position of 2, 1 to the position of 3, 5 to the position of 4 and 4 to the position of 5. The effect of this permutation is the same as cycling 1-&gt; 3 -&gt; 2 and 2 back to 1, and cycling 4 -&gt; 5 and 5 back to 4. Similar to how natural numbers can be factored into a product of prime factors, each permutation can be factored into disjoint cycles. For each permutation, the cycles are unique in a sense for each permutation. 
The Cycle Index polynomial is computed in terms of the number of cycles of each length for each permutation in the group.</p> <p>Putting all these together we get that the total count is given by the last formula in the link.</p> <p><strong>Implementation</strong></p> <p>As seen in the final formula, we need to compute:</p> <ol> <li><a href="https://en.wikipedia.org/wiki/Partition_(number_theory)" rel="noreferrer">Partitions of a number</a></li> <li><a href="https://en.wikipedia.org/wiki/Greatest_common_divisor#:%7E:text=In%20mathematics%2C%20the%20greatest%20common,8%20and%2012%20is%204." rel="noreferrer">Greatest common divisors</a> (gcd) of many numbers.</li> <li><a href="https://en.wikipedia.org/wiki/Factorial" rel="noreferrer">Factorials</a> of many numbers.</li> </ol> <p>For these, we can do:</p> <ol> <li>To compute all partitions one can use the iterative algorithms <a href="https://arxiv.org/pdf/0909.2331.pdf" rel="noreferrer">here</a>. Already written in Python <a href="https://jeromekelleher.net/generating-integer-partitions.html" rel="noreferrer">here</a>.</li> <li>An efficient way to compute <code>gcd</code> one could use <a href="https://en.wikipedia.org/wiki/Euclidean_algorithm#:%7E:text=In%20mathematics%2C%20the%20Euclidean%20algorithm,them%20both%20without%20a%20remainder." rel="noreferrer">Euclidean algorithm</a>. However, since we are going to need the gcd of all pairs of numbers in a range and each one many times. It is better to pre-compute the full table of gcd all at once by <em>dynamic programming</em>. If <code>a&gt;b</code>, then <code>gcd(a,b)=gcd(a-b,b)</code>. This recurrence equation allows to compute gcd of larger pairs in terms of that of smaller pairs. In the table, one has the initial values <code>gcd(1,a)=gcd(a,1)=1</code> and <code>gcd(a,a)=a</code>, for all <code>a</code>.</li> <li>The same happens for factorials. The formula will require the factorials of all numbers in a range many times each. So, it is better to compute them all from the bottom up using that <code>n! = n(n-1)!</code> and <code>0!=1!=1</code>.</li> </ol> <p>An implementation in Python could look like <a href="https://github.com/franklinvp/foobar/blob/master/foobar2020/solutionProblem1.py" rel="noreferrer">this</a>. Feel free to improve it.</p>
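<p>As an illustration of points 2 and 3, here is a minimal sketch of the two bottom-up precomputations (in the problem, <code>n</code> would be max(h, w)):</p> <pre><code>def precompute(n):
    """Bottom-up tables: gcd[a][b] for all 1 &lt;= a, b &lt;= n and fact[k] = k!.
    Each entry is a table lookup, never a recomputation."""
    gcd = [[0] * (n + 1) for _ in range(n + 1)]
    for a in range(1, n + 1):
        for b in range(1, n + 1):
            if a == b:
                gcd[a][b] = a
            elif a &gt; b:
                gcd[a][b] = gcd[a - b][b]   # gcd(a, b) = gcd(a - b, b)
            else:
                gcd[a][b] = gcd[a][b - a]
    fact = [1] * (n + 1)
    for k in range(1, n + 1):
        fact[k] = k * fact[k - 1]           # k! = k * (k - 1)!
    return gcd, fact

gcd, fact = precompute(12)
assert gcd[8][12] == 4 and fact[5] == 120
</code></pre>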
2020-06-19 20:39:59.320000+00:00
2020-06-23 20:26:47.267000+00:00
2020-06-23 20:26:47.267000+00:00
null
61,689,832
<p>The code runs fine in an online Python compiler but fails all test cases in Google Foobar.</p> <pre><code>from math import factorial
from collections import Counter
from fractions import gcd

def cycle_count(c, n):
    cc = factorial(n)
    for a, b in Counter(c).items():
        cc //= (a**b)*factorial(b)
    return cc

def cycle_partitions(n, i=1):
    yield [n]
    for i in range(i, n//2+1):
        for p in cycle_partitions(n-i, i):
            yield [i]+p

def solution(w, h, s):
    grid = 0
    for cpw in cycle_partitions(w):
        for cph in cycle_partitions(h):
            m = cycle_count(cpw, w)*cycle_count(cph, h)
            grid += m*(s**sum([sum([gcd(i, j) for i in cpw]) for j in cph]))
    return grid//(factorial(w)*factorial(h))
</code></pre> <p>Check out this code, which is to be executed. Would love suggestions!</p>
2020-05-08 23:39:08.550000+00:00
2020-07-07 12:05:46.167000+00:00
2020-07-07 12:05:46.167000+00:00
python|python-3.x|python-2.7|data-structures
['https://franklinvp.github.io/2020-06-05-PolyaFooBar/', 'https://en.wikipedia.org/wiki/Permutation_group', 'https://en.wikipedia.org/wiki/P%C3%B3lya_enumeration_theorem', 'https://en.wikipedia.org/wiki/Direct_product_of_groups#:%7E:text=In%20mathematics%2C%20specifically%20in%20group,of%20direct%20product%20in%20mathematics.', 'https://en.wikipedia.org/wiki/Symmetric_group', 'https://en.wikipedia.org/wiki/Partition_(number_theory)', 'https://en.wikipedia.org/wiki/Greatest_common_divisor#:%7E:text=In%20mathematics%2C%20the%20greatest%20common,8%20and%2012%20is%204.', 'https://en.wikipedia.org/wiki/Factorial', 'https://arxiv.org/pdf/0909.2331.pdf', 'https://jeromekelleher.net/generating-integer-partitions.html', 'https://en.wikipedia.org/wiki/Euclidean_algorithm#:%7E:text=In%20mathematics%2C%20the%20Euclidean%20algorithm,them%20both%20without%20a%20remainder.', 'https://github.com/franklinvp/foobar/blob/master/foobar2020/solutionProblem1.py']
12
47,751,281
<p>I am not sure if the Stanford NLP toolkit has a lemmatizer, but you can try</p> <ul> <li>The state-of-the-art is <a href="http://qatsdemo.cloudapp.net/farasa/" rel="nofollow noreferrer">Farasa Lemmatizer</a>.</li> <li>MADAMIRA for Arabic processing</li> </ul> <p>Farasa Lemmatizer outperforms the MADAMIRA lemmatizer in terms of accuracy. With an accuracy of about 97.23%, it gives a +7% relative gain over MADAMIRA on the lemmatization task.</p> <p>You can read more about Farasa Lemmatizer at the following link: <a href="https://arxiv.org/pdf/1710.06700.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1710.06700.pdf</a></p>
2017-12-11 10:50:59.977000+00:00
2017-12-11 10:50:59.977000+00:00
null
null
29,151,329
<p>I am trying to do lemmatization, i.e. identifying the lemma and possibly the Arabic root of a verb, for example: يتصل ==> lemma (infinitive of the verb) ==> اتصل ==> root (triliteral root / Jidr thoulathi) ==> و ص ل</p> <p>Do you think Stanford NLP can do that?</p> <p>Best Regards,</p>
2015-03-19 17:33:54.397000+00:00
2018-07-21 12:49:27.393000+00:00
2015-03-19 21:13:53.303000+00:00
nlp|stanford-nlp|lexical-analysis|stemming|lemmatization
['http://qatsdemo.cloudapp.net/farasa/', 'https://arxiv.org/pdf/1710.06700.pdf']
2
57,951,531
<p>Batch normalization is a terrible normalization choice for tasks where semantic information is being passed through the network. Look into conditional normalization methods - Adaptive Instance Normalization, etc. - to understand my point. Also see this paper: <a href="https://arxiv.org/abs/1903.07291" rel="nofollow noreferrer">https://arxiv.org/abs/1903.07291</a>. Batch normalization washes away all the semantic information in the network.</p>
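<p>For a feel of what conditional normalization does differently, here is a minimal NumPy sketch of Adaptive Instance Normalization (the (N, C, H, W) layout and shapes are my assumptions for illustration, not code from the paper): the content features are normalized per sample and channel, and the statistics of a conditioning (style) input are injected back instead of being washed out.</p> <pre><code>import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization, assuming (N, C, H, W) arrays:
    normalize content per sample and channel, then re-scale and shift
    with the style input's per-channel statistics."""
    c_mu = content.mean(axis=(2, 3), keepdims=True)
    c_sd = content.std(axis=(2, 3), keepdims=True)
    s_mu = style.mean(axis=(2, 3), keepdims=True)
    s_sd = style.std(axis=(2, 3), keepdims=True)
    return s_sd * (content - c_mu) / (c_sd + eps) + s_mu

x = np.random.randn(2, 8, 16, 16)   # content features
y = np.random.randn(2, 8, 16, 16)   # conditioning (style) features
print(adain(x, y).shape)            # (2, 8, 16, 16)
</code></pre>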
2019-09-16 06:37:31.270000+00:00
2019-09-16 06:37:31.270000+00:00
null
null
49,161,959
<p>I'm using TensorFlow for a multi-target regression problem. Specifically, in a fully convolutional residual network for pixel-wise labeling, with the input being an image and the label a mask. In my case I am using brain MR as images and the labels are masks of the tumors.</p> <p>I have accomplished a fairly decent result using my net: <a href="https://i.stack.imgur.com/cISiB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cISiB.png" alt=""></a></p> <p>Although I am sure there is still room for improvement. Therefore, I wanted to add batch normalization. I implemented it as follows:</p> <pre><code># Convolutional Layer 1
Z10 = tf.nn.conv2d(X, W_conv10, strides = [1, 1, 1, 1], padding='SAME')
Z10 = tf.contrib.layers.batch_norm(Z10, center=True, scale=True, is_training = train_flag)
A10 = tf.nn.relu(Z10)

Z1 = tf.nn.conv2d(Z10, W_conv1, strides = [1, 2, 2, 1], padding='SAME')
Z1 = tf.contrib.layers.batch_norm(Z1, center=True, scale=True, is_training = train_flag)
A1 = tf.nn.relu(Z1)
</code></pre> <p>for each of the conv and transpose layers of my net. But the results are not what I expected. The net with batch normalization has terrible performance. In orange is the loss of the net without batch normalization, while the blue has it: <a href="https://i.stack.imgur.com/dwKfU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dwKfU.png" alt="loss"></a></p> <p>Not only is the net learning slower, the predicted labels are also very bad in the net using batch normalization.</p> <p>Does anyone know why this might be the case? Could it be my cost function? I am currently using</p> <p><code>loss = tf.nn.sigmoid_cross_entropy_with_logits(logits = dA1, labels = Y) cost = tf.reduce_mean(loss)</code></p>
2018-03-07 22:04:49.880000+00:00
2020-10-18 21:26:18.133000+00:00
2018-03-07 22:30:49.423000+00:00
tensorflow|deep-learning|image-segmentation|batch-normalization
['https://arxiv.org/abs/1903.07291']
1
56,332,540
<p>Take a look at what Leap Motion has done with the Oculus Rift. I'm not sure what they're using internally to segment hand poses, but there is another paper that produces hand poses effectively. If you have a stereo camera setup, you can use this paper's methods: <a href="https://arxiv.org/pdf/1610.07214.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1610.07214.pdf</a>.</p> <p>The only promising solutions I've seen for monocular cameras train on large datasets. </p>
2019-05-27 21:17:11.670000+00:00
2019-05-27 23:02:18.827000+00:00
2019-05-27 23:02:18.827000+00:00
null
56,232,547
<p>I am working on a hand detection project. There are many good projects on the web for this, but what I need is detection of a specific hand pose. It needs a totally open palm with the whole palm facing outwards, like the image below:<br> <a href="https://i.stack.imgur.com/tS61S.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tS61S.jpg" alt="left hand will not be detected"></a></p> <p>The first hand faces inwards, so it should not be detected, and the right one faces outwards, so it should be detected. I can already detect hands with OpenCV, but how can I tell the hand orientation?</p>
2019-05-21 06:46:25.200000+00:00
2021-01-10 13:40:00+00:00
2019-06-05 20:23:13.370000+00:00
ios|opencv|tensorflow|machine-learning|computer-vision
['https://arxiv.org/pdf/1610.07214.pdf']
1
57,205,993
<p>There is no way to avoid overlapping regions with multiclass SVM. From <a href="https://arxiv.org/ftp/arxiv/papers/0711/0711.2914.pdf" rel="nofollow noreferrer">https://arxiv.org/ftp/arxiv/papers/0711/0711.2914.pdf</a>, you have a fairly clear explanation: </p> <blockquote> <p>As mentioned before, SVM classification is essentially a binary (two-class) classification technique, which has to be modified to handle the multiclass tasks in real world situations e.g. derivation of land cover information from satellite images. Two of the common methods to enable this adaptation include the 1A1 and 1AA techniques. The 1AA approach represents the earliest and most common SVM multiclass approach (Melgani and Bruzzone, 2004) and involves the division of an N class dataset into N two-class cases. If say the classes of interest in a satellite image include water, vegetation and built up areas, classification would be effected by classifying water against non-water areas i.e. (vegetation and built up areas) or vegetation against non-vegetative areas i.e. (water and built up areas). The 1A1 approach on the other hand involves constructing a machine for each pair of classes resulting in N(N-1)/2 machines. When applied to a test point, each classification gives one vote to the winning class and the point is labeled with the class having most votes. This approach can be further modified to give weighting to the voting process. From machine learning theory, it is acknowledged that the disadvantage the 1AA approach has over 1A1 is that its performance can be compromised due to unbalanced training datasets (Gualtieri and Cromp, 1998), however, the 1A1 approach is more computationally intensive since the results of more SVM pairs ought to be computed. In this paper, the performance of these two techniques are compared and evaluated to establish their performance on the extraction of land cover information from satellite images. </p> </blockquote> <p>So you have either N classifiers, or N(N-1)/2 classifiers, that use the whole available space. As these are (for the purposes of this explanation) independent, the only way to have the decision boundaries not cross would be to have parallel decision boundaries, and even then the regions would be overlapping (I feel like this sentence may not be the clearest, don't hesitate to ask for more explanations if need be). </p> <p>If you want clear non-overlapping regions, I suggest you use another algorithm that handles the multiclass problem better, such as KNN.</p>
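<p>If you want to see the two schemes side by side, here is a minimal scikit-learn sketch on toy data (it won't remove the overlap, it just makes the 1A1 vs 1AA distinction explicit):</p> <pre><code>from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=2, n_informative=2,
                           n_redundant=0, n_classes=3, n_clusters_per_class=1,
                           random_state=0)

ovo = OneVsOneClassifier(SVC(kernel='rbf', gamma='scale')).fit(X, y)   # N(N-1)/2 machines (1A1)
ovr = OneVsRestClassifier(SVC(kernel='rbf', gamma='scale')).fit(X, y)  # N machines (1AA)
print(ovo.score(X, y), ovr.score(X, y))
</code></pre>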
2019-07-25 15:54:06.823000+00:00
2019-07-25 15:54:06.823000+00:00
null
null
57,188,568
<p>I am using the SVM in scikit-learn library for doing multiclass classification. I am wondering why these regions (decision boundaries) are overlapping (as seen in the picture below)?</p> <p><a href="https://i.stack.imgur.com/1uCGV.png" rel="nofollow noreferrer">Results</a></p> <p>Could someone please explain the difference between whether I do one-vs-one or one-vs-all in terms of the regions overlapping? I assumed one-vs-one would have clearly delineated regions with no overlap since it's maximizing the margin against each other class and that one-vs-all could have regions overlapping, but perhaps this is inaccurate because 3 of the 4 models I am training are one-vs-one, and they show overlapping regions.</p> <p>I've considered maybe it's a plotting issue as well, but could not determine any issues. If the alpha is 1, then the regions no longer overlap, but I assume this is expected since it's just covering up the other regions it overlays (which is to be expected and doesn't solve the problem).</p> <h1>Here is the function which creates, trains, and plots 4 different SVM models #(3 different kernels using SVC and 1 with LinearSVC).</h1> <pre><code>def createSVMandPlot(X,y,x_name,y_name): h = .02 # step size in the mesh # we create an instance of SVM and fit out data. We do not scale our # data since we want to plot the support vectors C = 1.0 # SVM regularization parameter svc = svm.SVC(kernel='linear', C=C).fit(X, y) #1 vs 1 rbf_svc = svm.SVC(kernel='rbf', gamma='scale', C=C).fit(X, y) #1v1 poly_svc = svm.SVC(kernel='poly', degree=3, gamma='scale',C=C).fit(X, y) #1v1 lin_svc = svm.LinearSVC(C=C).fit(X, y) #1 vs rest print(str(x_name)+' vs. '+str(y_name)) for i, clf in enumerate((svc, lin_svc, rbf_svc, poly_svc)): X_pred=clf.predict(X) X_pred1=np.asarray(X_pred).reshape(len(X_pred),1) A=confusion_matrix(X_pred1, y) print(A) c=0 for r in range(len(X_pred)): if X_pred[r]==y[r]: c+=1 print(str(c)+' out of 34 predicted correctly (true positives)') ============================================================================= with warnings.catch_warnings(): warnings.filterwarnings(&quot;ignore&quot;) ============================================================================= x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # title for the plots titles = ['SVC w/ linear kernel', 'LinearSVC (w/ linear kernel)', 'SVM w/ RBF kernel', 'SVM w/ poly(degree 3) kernel'] plt.pause(7) for i, clf in enumerate((svc, lin_svc, rbf_svc, poly_svc)): # point in the mesh [x_min, x_max]x[y_min, y_max]. plt.subplot(2, 2, i + 1) plt.subplots_adjust(wspace=0.4, hspace=0.4) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.contourf(xx, yy, Z, alpha=.5) # Plot also the training points plt.scatter(X[:, 0], X[:, 1], s=13,c=y) plt.xlabel(x_name) plt.ylabel(y_name) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.xticks(()) plt.yticks(()) plt.title(titles[i]) plt.show() </code></pre> <p>The result from this is an image with decision bounaries/regions overlapping. It implies that if a point is at a specific 2D coordinate (x1,y1), then it could be classified as two or more classes instead of just one which is not what is desired or expected. Could someone explain what might be going on? Thanks</p> <p>EDIT: I included a picture of the results with the overlapping decision boundaries.</p>
2019-07-24 17:45:55.277000+00:00
2019-07-25 15:54:06.823000+00:00
2020-06-20 09:12:55.060000+00:00
python|matplotlib|machine-learning|scikit-learn|libsvm
['https://arxiv.org/ftp/arxiv/papers/0711/0711.2914.pdf']
1
61,818,298
<p>I've found a workaround.</p> <p>As pointed out earlier, the problem is that <code>SMOTE {smotefamily}</code>'s <code>K</code> cannot be greater than or equal to the sample size.</p> <p>I dug into the process and discovered that <code>SMOTE {smotefamily}</code> uses <code>knearest {smotefamily}</code>, which uses <code>knnx.index {FNN}</code>, which in turn uses <code>get.knn {FNN}</code>, which is what returns the error <code>warning("k should be less than sample size!")</code> that terminates the tuning process in <code>mlr3</code>.</p> <p>Now, within <code>SMOTE {smotefamily}</code>, the three arguments for <code>knearest {smotefamily}</code> are <code>P_set</code>, <code>P_set</code> and <code>K</code>. From an <code>mlr3</code> resampling perspective, data frame <code>P_set</code> is a subset of the cross-validation fold of the training data, filtered to only contain the records of the minority class. The 'sample size' that the error is referring to is the number of rows of <code>P_set</code>.</p> <p>Thus, it becomes more likely that <code>K &gt;= nrow(P_set)</code> as <code>K</code> increases via a trafo such as <code>some_integer ^ K</code> (e.g. <code>2 ^ K</code>).</p> <p>We need to ensure that <code>K</code> will never be greater than or equal to <code>nrow(P_set)</code>.</p> <p>Here's my proposed solution:</p> <ol> <li>Define a variable <code>cv_folds</code> <em>before</em> defining the CV resampling strategy with <code>rsmp()</code>.</li> <li>Define the CV resampling strategy where <code>folds = cv_folds</code> in <code>rsmp()</code>, <em>before</em> defining the trafo.</li> <li>Instantiate the CV. Now, the dataset is split into training and test/validation data in each fold.</li> <li>Find the minimum sample size of the minority class among all training data folds and set that as the threshold for <code>K</code>:</li> </ol> <pre><code>smote_k_thresh &lt;- 1:cv_folds %&gt;%
  lapply(
    function(x) {
      index &lt;- cv$train_set(x)
      aux &lt;- as.data.frame(task$data())[index, task$target_names]
      aux &lt;- min(table(aux))
    }
  ) %&gt;%
  bind_cols %&gt;%
  min %&gt;%
  unique
</code></pre> <ol start="5"> <li>Now define the trafo as follows:</li> </ol> <pre><code>param_set$trafo &lt;- function(x, param_set) {
  index &lt;- which(grepl('.K', names(x)))
  if (sum(index) != 0){
    aux &lt;- round(2 ^ x[[index]])
    if (aux &lt; smote_k_thresh) {
      x[[index]] &lt;- aux
    } else {
      x[[index]] &lt;- sample(smote_k_thresh - 1, 1)
    }
  }
  x
}
</code></pre> <p>In other words, when the trafoed <code>K</code> remains smaller than the sample size, keep it. 
Otherwise, set its value to be any number between 1 and <code>smote_k_thresh - 1</code>.</p> <p><strong>Implementation</strong></p> <p>Original code slightly modified to accommodate proposed tweaks:</p> <pre><code>library("mlr3learners") # additional ML algorithms library("mlr3viz") # autoplot for benchmarks library("paradox") # hyperparameter space library("OpenML") # to obtain data sets library("smotefamily") # SMOTE algorithm for imbalance correction # get list of curated binary classification data sets (see https://arxiv.org/abs/1708.03731v2) ds = listOMLDataSets( number.of.classes = 2, number.of.features = c(1, 100), number.of.instances = c(5000, 10000) ) # select imbalanced data sets (without categorical features as SMOTE cannot handle them) ds = subset(ds, minority.class.size / number.of.instances &lt; 0.2 &amp; number.of.symbolic.features == 1) ds d = getOMLDataSet(980) d # make sure target is a factor and create mlr3 tasks data = as.data.frame(d) data[[d$target.features]] = as.factor(data[[d$target.features]]) task = TaskClassif$new( id = d$desc$name, backend = data, target = d$target.features) task # Code above copied from https://mlr3gallery.mlr-org.com/posts/2020-03-30-imbalanced-data/ class_counts &lt;- table(task$truth()) majority_to_minority_ratio &lt;- class_counts[class_counts == max(class_counts)] / class_counts[class_counts == min(class_counts)] # Pipe operator for SMOTE po_smote &lt;- po("smote", dup_size = round(majority_to_minority_ratio)) # Define and instantiate resampling strategy to be applied within pipeline # Do that BEFORE defining the trafo cv_folds &lt;- 2 cv &lt;- rsmp("cv", folds = cv_folds) cv$instantiate(task) # Calculate max possible value for k-nearest neighbours smote_k_thresh &lt;- 1:cv_folds %&gt;% lapply( function(x) { index &lt;- cv$train_set(x) aux &lt;- as.data.frame(task$data())[index, task$target_names] aux &lt;- min(table(aux)) } ) %&gt;% bind_cols %&gt;% min %&gt;% unique # Random Forest learner rf &lt;- lrn("classif.ranger", predict_type = "prob") # Pipeline of Random Forest learner with SMOTE graph &lt;- po_smote %&gt;&gt;% po('learner', rf, id = 'rf') graph$plot() # Graph learner rf_smote &lt;- GraphLearner$new(graph, predict_type = 'prob') rf_smote$predict_type &lt;- 'prob' # Parameter set in data table format ps_table &lt;- as.data.table(rf_smote$param_set) View(ps_table[, 1:4]) # Define parameter search space for the SMOTE parameters param_set &lt;- ps_table$id %&gt;% lapply( function(x) { if (grepl('smote.', x)) { if (grepl('.dup_size', x)) { ParamInt$new(x, lower = 1, upper = round(majority_to_minority_ratio)) } else if (grepl('.K', x)) { ParamInt$new(x, lower = 1, upper = round(majority_to_minority_ratio)) } } } ) param_set &lt;- Filter(Negate(is.null), param_set) param_set &lt;- ParamSet$new(param_set) # Apply transformation function on SMOTE's K while ensuring it never equals or exceeds the sample size param_set$trafo &lt;- function(x, param_set) { index &lt;- which(grepl('.K', names(x))) if (sum(index) != 0){ aux &lt;- round(5 ^ x[[index]]) # Try a large value here for the sake of the example if (aux &lt; smote_k_thresh) { x[[index]] &lt;- aux } else { x[[index]] &lt;- sample(smote_k_thresh - 1, 1) } } x } # Set up tuning instance instance &lt;- TuningInstance$new( task = task, learner = rf_smote, resampling = cv, measures = msr("classif.bbrier"), param_set, terminator = term("evals", n_evals = 10), store_models = TRUE) tuner &lt;- TunerRandomSearch$new() # Tune pipe learner to find optimal SMOTE parameter values 
tuner$optimize(instance) # Here are the original K values instance$archive$data # And here are their transformations instance$archive$data$opt_x </code></pre>
2020-05-15 11:38:14.320000+00:00
2020-05-15 11:38:14.320000+00:00
null
null
61,772,147
<p>I'm having trouble with the trafo function for <code>SMOTE {smotefamily}</code>'s <code>K</code> parameter. In particular, when the number of nearest neighbours <code>K</code> is greater than or equal to the sample size, an error is returned (<code>warning("k should be less than sample size!")</code>) and the tuning process is terminated.</p> <p>The user cannot control <code>K</code> to be smaller than the sample size during the internal resampling process. This would have to be controlled internally so that if, for instance, <code>trafo_K = 2 ^ K &gt;= sample_size</code> for some value of <code>K</code>, then, say, <code>trafo_K = sample_size - 1</code>.</p> <p>I was wondering if there's a solution to this or if one is already on its way?</p> <pre><code>library("mlr3") # mlr3 base package library("mlr3misc") # contains some helper functions library("mlr3pipelines") # create ML pipelines library("mlr3tuning") # tuning ML algorithms library("mlr3learners") # additional ML algorithms library("mlr3viz") # autoplot for benchmarks library("paradox") # hyperparameter space library("OpenML") # to obtain data sets library("smotefamily") # SMOTE algorithm for imbalance correction # get list of curated binary classification data sets (see https://arxiv.org/abs/1708.03731v2) ds = listOMLDataSets( number.of.classes = 2, number.of.features = c(1, 100), number.of.instances = c(5000, 10000) ) # select imbalanced data sets (without categorical features as SMOTE cannot handle them) ds = subset(ds, minority.class.size / number.of.instances &lt; 0.2 &amp; number.of.symbolic.features == 1) ds d = getOMLDataSet(980) d # make sure target is a factor and create mlr3 tasks data = as.data.frame(d) data[[d$target.features]] = as.factor(data[[d$target.features]]) task = TaskClassif$new( id = d$desc$name, backend = data, target = d$target.features) task # Code above copied from https://mlr3gallery.mlr-org.com/posts/2020-03-30-imbalanced-data/ class_counts &lt;- table(task$truth()) majority_to_minority_ratio &lt;- class_counts[class_counts == max(class_counts)] / class_counts[class_counts == min(class_counts)] # Pipe operator for SMOTE po_smote &lt;- po("smote", dup_size = round(majority_to_minority_ratio)) # Random Forest learner rf &lt;- lrn("classif.ranger", predict_type = "prob") # Pipeline of Random Forest learner with SMOTE graph &lt;- po_smote %&gt;&gt;% po('learner', rf, id = 'rf') graph$plot() # Graph learner rf_smote &lt;- GraphLearner$new(graph, predict_type = 'prob') rf_smote$predict_type &lt;- 'prob' # Parameter set in data table format ps_table &lt;- as.data.table(rf_smote$param_set) View(ps_table[, 1:4]) # Define parameter search space for the SMOTE parameters param_set &lt;- ps_table$id %&gt;% lapply( function(x) { if (grepl('smote.', x)) { if (grepl('.dup_size', x)) { ParamInt$new(x, lower = 1, upper = round(majority_to_minority_ratio)) } else if (grepl('.K', x)) { ParamInt$new(x, lower = 1, upper = round(majority_to_minority_ratio)) } } } ) param_set &lt;- Filter(Negate(is.null), param_set) param_set &lt;- ParamSet$new(param_set) # Apply transformation function on SMOTE's K (= The number of nearest neighbors used for sampling new values. See SMOTE().) 
param_set$trafo &lt;- function(x, param_set) { index &lt;- which(grepl('.K', names(x))) if (sum(index) != 0){ x[[index]] &lt;- round(3 ^ x[[index]]) # Intentionally define a trafo that won't work } x } # Define and instantiate resampling strategy to be applied within pipeline cv &lt;- rsmp("cv", folds = 2) cv$instantiate(task) # Set up tuning instance instance &lt;- TuningInstance$new( task = task, learner = rf_smote, resampling = cv, measures = msr("classif.bbrier"), param_set, terminator = term("evals", n_evals = 3), store_models = TRUE) tuner &lt;- TunerRandomSearch$new() # Tune pipe learner to find optimal SMOTE parameter values tuner$optimize(instance) </code></pre> <p>And here's what happens</p> <pre><code>INFO [11:00:14.904] Benchmark with 2 resampling iterations INFO [11:00:14.919] Applying learner 'smote.rf' on task 'optdigits' (iter 2/2) Error in get.knnx(data, query, k, algorithm) : ANN: ERROR-------&gt; In addition: Warning message: In get.knnx(data, query, k, algorithm) : k should be less than sample size! </code></pre> <p>Session info</p> <pre><code>R version 3.6.2 (2019-12-12) Platform: x86_64-w64-mingw32/x64 (64-bit) Running under: Windows 10 x64 (build 16299) Matrix products: default locale: [1] LC_COLLATE=English_United Kingdom.1252 LC_CTYPE=English_United Kingdom.1252 [3] LC_MONETARY=English_United Kingdom.1252 LC_NUMERIC=C [5] LC_TIME=English_United Kingdom.1252 attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] smotefamily_1.3.1 OpenML_1.10 mlr3viz_0.1.1.9002 [4] mlr3tuning_0.1.2-9000 mlr3pipelines_0.1.2.9000 mlr3misc_0.2.0 [7] mlr3learners_0.2.0 mlr3filters_0.2.0.9000 mlr3_0.2.0-9000 [10] paradox_0.2.0 yardstick_0.0.5 rsample_0.0.5 [13] recipes_0.1.9 parsnip_0.0.5 infer_0.5.1 [16] dials_0.0.4 scales_1.1.0 broom_0.5.4 [19] tidymodels_0.0.3 reshape2_1.4.3 janitor_1.2.1 [22] data.table_1.12.8 forcats_0.4.0 stringr_1.4.0 [25] dplyr_0.8.4 purrr_0.3.3 readr_1.3.1 [28] tidyr_1.0.2 tibble_3.0.1 ggplot2_3.3.0 [31] tidyverse_1.3.0 loaded via a namespace (and not attached): [1] utf8_1.1.4 tidyselect_1.0.0 lme4_1.1-21 [4] htmlwidgets_1.5.1 grid_3.6.2 ranger_0.12.1 [7] pROC_1.16.1 munsell_0.5.0 codetools_0.2-16 [10] bbotk_0.1 DT_0.12 future_1.17.0 [13] miniUI_0.1.1.1 withr_2.2.0 colorspace_1.4-1 [16] knitr_1.28 uuid_0.1-4 rstudioapi_0.10 [19] stats4_3.6.2 bayesplot_1.7.1 listenv_0.8.0 [22] rstan_2.19.2 lgr_0.3.4 DiceDesign_1.8-1 [25] vctrs_0.2.4 generics_0.0.2 ipred_0.9-9 [28] xfun_0.12 R6_2.4.1 markdown_1.1 [31] mlr3measures_0.1.3-9000 rstanarm_2.19.2 lhs_1.0.1 [34] assertthat_0.2.1 promises_1.1.0 nnet_7.3-12 [37] gtable_0.3.0 globals_0.12.5 processx_3.4.1 [40] timeDate_3043.102 rlang_0.4.5 workflows_0.1.1 [43] BBmisc_1.11 splines_3.6.2 checkmate_2.0.0 [46] inline_0.3.15 yaml_2.2.1 modelr_0.1.5 [49] tidytext_0.2.2 threejs_0.3.3 crosstalk_1.0.0 [52] backports_1.1.6 httpuv_1.5.2 rsconnect_0.8.16 [55] tokenizers_0.2.1 tools_3.6.2 lava_1.6.6 [58] ellipsis_0.3.0 ggridges_0.5.2 Rcpp_1.0.4.6 [61] plyr_1.8.5 base64enc_0.1-3 visNetwork_2.0.9 [64] ps_1.3.0 prettyunits_1.1.1 rpart_4.1-15 [67] zoo_1.8-7 haven_2.2.0 fs_1.3.1 [70] furrr_0.1.0 magrittr_1.5 colourpicker_1.0 [73] reprex_0.3.0 GPfit_1.0-8 SnowballC_0.6.0 [76] packrat_0.5.0 matrixStats_0.55.0 tidyposterior_0.0.2 [79] hms_0.5.3 shinyjs_1.1 mime_0.8 [82] xtable_1.8-4 XML_3.99-0.3 tidypredict_0.4.3 [85] shinystan_2.5.0 readxl_1.3.1 gridExtra_2.3 [88] rstantools_2.0.0 compiler_3.6.2 crayon_1.3.4 [91] minqa_1.2.4 StanHeaders_2.21.0-1 htmltools_0.4.0 [94] later_1.0.0 
lubridate_1.7.4 DBI_1.1.0 [97] dbplyr_1.4.2 MASS_7.3-51.4 boot_1.3-23 [100] Matrix_1.2-18 cli_2.0.1 parallel_3.6.2 [103] gower_0.2.1 igraph_1.2.4.2 pkgconfig_2.0.3 [106] xml2_1.2.2 foreach_1.4.7 dygraphs_1.1.1.6 [109] prodlim_2019.11.13 farff_1.1 rvest_0.3.5 [112] snakecase_0.11.0 janeaustenr_0.1.5 callr_3.4.1 [115] digest_0.6.25 cellranger_1.1.0 curl_4.3 [118] shiny_1.4.0 gtools_3.8.1 nloptr_1.2.1 [121] lifecycle_0.2.0 nlme_3.1-142 jsonlite_1.6.1 [124] fansi_0.4.1 pillar_1.4.3 lattice_0.20-38 [127] loo_2.2.0 fastmap_1.0.1 httr_1.4.1 [130] pkgbuild_1.0.6 survival_3.1-8 glue_1.4.0 [133] xts_0.12-0 FNN_1.1.3 shinythemes_1.1.2 [136] iterators_1.0.12 class_7.3-15 stringi_1.4.4 [139] memoise_1.1.0 future.apply_1.5.0 </code></pre> <p>Many thanks.</p>
2020-05-13 10:24:16.677000+00:00
2020-05-15 11:38:14.320000+00:00
null
r|machine-learning|mlr3
[]
0
51,218,857
<p>Yes, you should add in-game backgrounds to your training images, or you will never get decent detection quality. The network needs to see the background, the placement of the objects on the background, even the lighting of the objects in the scene. They all contribute to the final detection quality.</p> <p>The technique you use to blend the background and your images is also important.</p> <p>A good read on the subject: <a href="https://arxiv.org/pdf/1702.07836.pdf" rel="nofollow noreferrer">Synthesizing Training Data for Object Detection in Indoor Scenes</a></p>
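<p>As a rough sketch of the compositing step with Pillow (the file paths, jitter ranges and returned box format are placeholder choices, not a prescribed pipeline):</p> <pre><code>import random
from PIL import Image

def composite(character_png, background_png):
    """Paste an RGBA character render onto a game screenshot at a random
    position and scale, using the alpha channel as the paste mask."""
    bg = Image.open(background_png).convert('RGB')
    fg = Image.open(character_png).convert('RGBA')
    scale = random.uniform(0.5, 1.0)
    fg = fg.resize((int(fg.width * scale), int(fg.height * scale)))
    x = random.randint(0, max(0, bg.width - fg.width))
    y = random.randint(0, max(0, bg.height - fg.height))
    bg.paste(fg, (x, y), mask=fg)   # alpha mask keeps soft edges
    # normalized YOLO-style (cx, cy, w, h) box for the pasted object
    box = ((x + fg.width / 2) / bg.width, (y + fg.height / 2) / bg.height,
           fg.width / bg.width, fg.height / bg.height)
    return bg, box
</code></pre>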
2018-07-06 23:55:02.173000+00:00
2018-07-06 23:55:02.173000+00:00
null
null
50,988,903
<p>I'm trying to train a neural net using YOLOv2 to recognize characters and objects in a video game. For input data, I took screenshots of in-game assets from various angles. However, there are no backgrounds in these character models - only the models themselves. In the game, of course, there will be backgrounds behind the characters.</p> <p>Will this confuse the neural network? And if so, should I go ahead and find some sample background images from the game and apply them randomly to the input data?</p>
2018-06-22 13:20:21.930000+00:00
2018-07-06 23:55:02.173000+00:00
null
neural-network|artificial-intelligence|conv-neural-network|convolutional-neural-network|yolo
['https://arxiv.org/pdf/1702.07836.pdf']
1
46,998,587
<p>Finding (or rediscovering) a matrix multiplication algorithm is equivalent to solving the system of <a href="https://maths-people.anu.edu.au/~brent/pub/pub002.html" rel="nofollow noreferrer">Brent Equations</a>.</p> <p>For the <code>n*n</code> matrix product with <code>k</code> elementary multiplications, the system has <code>n^6</code> equations, each a sum of <code>k</code> 3-factor products. Thus, the system is highly non-linear and has <code>3k n^2</code> unknowns. In <a href="http://www.gregorybard.com/papers/early_release.pdf" rel="nofollow noreferrer">practice</a>, it is very hard to find solutions beyond the <code>2*2</code> case. For <code>2*2</code>, there are <code>64</code> equations with seven products each. For <code>3*3</code>, there are <code>729</code> equations with <code>23</code> products each.</p> <p>Researchers have tried to discover matrix multiplication algorithms for small-factor matrices for <a href="https://pdfs.semanticscholar.org/ef42/701ae41832ab90bacc8f08fc1c2812b24490.pdf" rel="nofollow noreferrer">decades.</a> It would be possible, but more than surprising, if a neural network were to beat the whole science community.</p> <p>In spite of my doubts, a <a href="https://arxiv.org/abs/1601.07227" rel="nofollow noreferrer">related research effort</a> succeeded in rediscovering the algorithms for 2x2 and 3x3 using neural networks.</p>
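<p>To make the <code>2*2</code> case concrete, here is a small sketch that encodes Strassen's seven products as the coefficient matrices solving the Brent equations, and checks them numerically:</p> <pre><code>import numpy as np

# Strassen's seven products for C = A @ B (2x2). Row i of U (resp. V) holds
# the coefficients of the linear combination of A (resp. B) entering product
# m_i; entries are ordered (a11, a12, a21, a22) and (b11, b12, b21, b22).
U = np.array([[1, 0, 0, 1], [0, 0, 1, 1], [1, 0, 0, 0], [0, 0, 0, 1],
              [1, 1, 0, 0], [-1, 0, 1, 0], [0, 1, 0, -1]])
V = np.array([[1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, -1], [-1, 0, 1, 0],
              [0, 0, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
# W recombines the seven products into (c11, c12, c21, c22).
W = np.array([[1, 0, 0, 1, -1, 0, 1],
              [0, 0, 1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0, 0, 0],
              [1, -1, 1, 0, 0, 1, 0]])

rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
m = (U @ A.reshape(4)) * (V @ B.reshape(4))   # seven scalar multiplications
assert np.allclose((W @ m).reshape(2, 2), A @ B)
</code></pre>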
2017-10-29 09:19:52.037000+00:00
2017-10-30 23:02:34.200000+00:00
2017-10-30 23:02:34.200000+00:00
null
46,984,581
<p>I've used neural networks a little, but not much. So as an attempt to increase my level of comfort, I decided to use one to play around with one of my favorite math problems: fast matrix multiplication. The standard algorithm takes O(n^3) to multiply two nxn matrices. The Strassen algorithm does it in O(n^2.8). The algorithms based on work by Coppersmith and Winograd get down to O(n^2.373) but are impractical due to the large constant factor. There's a lot of wiggle room in between the latter two. In particular, if you can multiply two 4x4 matrices using 48 multiply operations or fewer, you've done better than Strassen.</p> <p>So here's my setup: I have two (pseudo-randomly generated) nxn matrices, A and B. One neural network takes NMULT linear combinations of elements of A and NMULT linear combinations of B, multiplies them together pointwise and then takes n^2 linear combinations of the output, trying to reconstruct the product AB. The loss is the sum-of-squares error over the entries. The adversarial network takes two random matrices A' and B', and outputs softsign(A' + A_offset) and softsign(B' + B_offset), with loss function = -1 * sum-of-squares error of the other network.</p> <p>I alternate between 3 steps of training: training the fast-matrix-multiply network on random input matrices A and B, training the adversarial network on random input matrices A' and B', and training the fmm network on the output of the adversarial network.</p> <p>It doesn't work. Not only can I not do better than Strassen, I can't even reproduce basic matrix multiplication! That is, if I take n = 2 and NMULT = 8, I don't get down to 0 error.</p> <p>I know there are other (potentially better) ways of solving this problem than using neural networks -- I'm only doing this as a learning method. Can anyone give me suggestions as to how to fix this?</p> <p>See code below:</p> <pre><code>import numpy as np
import tensorflow as tf

epochs = 1000
tot_batch = 1000
learning_rate = 0.01

MATRIX_SIZE = 2
NMULTS = 8
nvals = MATRIX_SIZE * MATRIX_SIZE

# These are the inputs to the adversarial NN generating our input matrices A&amp;B.
a_inputs = tf.placeholder(tf.float32, [None, nvals])
b_inputs = tf.placeholder(tf.float32, [None, nvals])

adv_a_icpt = tf.Variable(tf.random_normal([nvals]))
adv_b_icpt = tf.Variable(tf.random_normal([nvals]))

a_vector = tf.nn.softsign(a_inputs + adv_a_icpt)
b_vector = tf.nn.softsign(b_inputs + adv_b_icpt)

# These are the two adversarial matrices we are multiplying; all entries
# are in [-1, 1]. This makes normalizing the error easier.
a_matrix = tf.reshape(a_vector, [-1, MATRIX_SIZE, MATRIX_SIZE])
b_matrix = tf.reshape(b_vector, [-1, MATRIX_SIZE, MATRIX_SIZE])

# This is the product A * B.
m_matrix = tf.matmul(a_matrix, b_matrix)

# This is what the fast-matrix-multiply NN will be predicting.
m_vector = tf.reshape(m_matrix, [-1, nvals])

fmm_a_wts = tf.Variable(tf.random_normal([nvals, NMULTS]))
fmm_b_wts = tf.Variable(tf.random_normal([nvals, NMULTS]))
fmm_output_wts = tf.Variable(tf.random_normal([NMULTS, nvals]))

# This is the output of the fast-matrix-multiply NN.
def fmm_output(input_a_vec, input_b_vec):
    hidden_a_inputs = tf.matmul(input_a_vec, fmm_a_wts)
    hidden_b_inputs = tf.matmul(input_b_vec, fmm_b_wts)
    hidden_output = tf.multiply(hidden_a_inputs, hidden_b_inputs)
    return tf.matmul(hidden_output, fmm_output_wts)

# Treat each element of the input arrays as having a variance of O(1). Then
# the output array elements have a variance of O(MATRIX_SIZE).
loss_adv = tf.divide(
    tf.losses.mean_squared_error(m_vector, fmm_output(a_vector, b_vector)),
    MATRIX_SIZE)
abs_err_vec_adv = tf.abs(tf.subtract(m_vector, fmm_output(a_vector, b_vector)))
mean_abs_err_adv = tf.reduce_mean(abs_err_vec_adv, reduction_indices=[1])

m_rand = tf.matmul(tf.reshape(a_inputs, [-1, MATRIX_SIZE, MATRIX_SIZE]),
                   tf.reshape(b_inputs, [-1, MATRIX_SIZE, MATRIX_SIZE]))
loss_rand = tf.divide(
    tf.losses.mean_squared_error(tf.reshape(m_rand, [-1, nvals]),
                                 fmm_output(a_inputs, b_inputs)),
    MATRIX_SIZE)

optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)

train_ADV = optimizer.minimize(-loss_adv, var_list=[adv_a_icpt, adv_b_icpt])
train_FMMA = optimizer.minimize(loss_adv,
                                var_list=[fmm_a_wts, fmm_b_wts, fmm_output_wts])
train_FMMR = optimizer.minimize(loss_rand,
                                var_list=[fmm_a_wts, fmm_b_wts, fmm_output_wts])

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)

    adv_batch_size = 100
    fmm_batch_size = 100

    for epoch in range(epochs):
        adv_loss = 0.0
        rand_loss = 0.0
        for i in range(tot_batch):
            # Run the fast-matrix-multiply NN training against random inputs.
            batch_a_inputs = np.random.uniform(low=-1., size=[fmm_batch_size, nvals])
            batch_b_inputs = np.random.uniform(low=-1., size=[fmm_batch_size, nvals])
            _, rerr = sess.run([train_FMMR, loss_rand],
                               feed_dict={a_inputs: batch_a_inputs,
                                          b_inputs: batch_b_inputs})

            # Run the adversarial NN training.
            batch_a_inputs = np.random.normal(size=[adv_batch_size, nvals])
            batch_b_inputs = np.random.normal(size=[adv_batch_size, nvals])
            sess.run(train_ADV, feed_dict={a_inputs: batch_a_inputs,
                                           b_inputs: batch_b_inputs})

            # Run the fast-matrix-multiply NN training against adversarial inputs.
            batch_a_inputs = np.random.normal(size=[fmm_batch_size, nvals])
            batch_b_inputs = np.random.normal(size=[fmm_batch_size, nvals])
            _, aerr, mae = sess.run([train_FMMA, loss_adv, mean_abs_err_adv],
                                    feed_dict={a_inputs: batch_a_inputs,
                                               b_inputs: batch_b_inputs})

            adv_loss += aerr / tot_batch
            rand_loss += 3.0 * rerr / tot_batch

            if i % 200 == 0:
                print("Batch " + str(i) + ", mean abs error: " + str(mae[0:4]))

        print("Epoch: " + str(epoch) + ", rand loss = " + str(rand_loss) +
              ", adv loss = " + str(adv_loss))
</code></pre>
2017-10-27 23:07:53.057000+00:00
2017-10-30 23:02:34.200000+00:00
2017-10-27 23:32:05.100000+00:00
tensorflow|neural-network|matrix-multiplication
['https://maths-people.anu.edu.au/~brent/pub/pub002.html', 'http://www.gregorybard.com/papers/early_release.pdf', 'https://pdfs.semanticscholar.org/ef42/701ae41832ab90bacc8f08fc1c2812b24490.pdf', 'https://arxiv.org/abs/1601.07227']
4
53,669,927
<p>I just want to reproduce the result of <a href="https://arxiv.org/ftp/arxiv/papers/1805/1805.00312.pdf" rel="nofollow noreferrer">this article</a>, but I cannot fix the problem with "TimeDistributed". I wonder what other layers were used in this article? GRU is just used to find the relation between the input data. I really want to know what is wrong with my code, because I am a new learner of Keras; I have been coding for more than two weeks.</p>
2018-12-07 12:50:03.020000+00:00
2018-12-07 12:50:03.020000+00:00
null
null
53,666,362
<p>When I build a CNN model, the input dimension is (None, 100, 100, 1) and the output is (400*1), but when I run my model some errors happen. Here is my model:</p> <pre><code>visible_image1 = Input(shape=(100, 100, 1))

conv_1 = Conv2D(filters=64, kernel_size=(5, 5), padding='Same')(visible_image1)
BatchNor_1 = BatchNormalization()(conv_1)
relu_1 = LeakyReLU(0.2)(BatchNor_1)
pool_1 = MaxPool2D(pool_size=(3, 3), strides=(3, 3))(relu_1)

conv_2 = Conv2D(filters=128, kernel_size=(5, 5), padding='Same')(pool_1)
BatchNor_2 = BatchNormalization()(conv_2)
relu_2 = LeakyReLU(0.2)(BatchNor_2)

conv_3 = Conv2D(filters=128, kernel_size=(5, 5), padding='Same')(relu_2)
BatchNor_3 = BatchNormalization()(conv_3)
relu_3 = LeakyReLU(0.2)(BatchNor_3)

conv_4 = Conv2D(filters=256, kernel_size=(5, 5), padding='Same')(relu_3)
BatchNor_4 = BatchNormalization()(conv_4)

conv_5 = Conv2D(filters=256, kernel_size=(5, 5), padding='Same')(BatchNor_3)
BatchNor_5 = BatchNormalization()(conv_5)

add_1 = Add()([BatchNor_4, BatchNor_5])
relu_4 = LeakyReLU(0.2)(add_1)

conv_6 = Conv2D(filters=128, kernel_size=(5, 5), padding='Same')(relu_4)
BatchNor_6 = BatchNormalization()(conv_6)
relu_5 = LeakyReLU(0.2)(BatchNor_6)

conv_7 = Conv2D(filters=128, kernel_size=(5, 5), padding='Same')(relu_5)
BatchNor_7 = BatchNormalization()(conv_7)
relu_6 = LeakyReLU(0.2)(BatchNor_7)

conv_8 = Conv2D(filters=256, kernel_size=(5, 5), padding='Same')(relu_6)
BatchNor_8 = BatchNormalization()(conv_8)

add_2 = Add()([BatchNor_8, relu_4])
relu_7 = LeakyReLU(0.2)(add_2)

conv_9 = Conv2D(filters=128, kernel_size=(5, 5), padding='Same')(relu_7)
BatchNor_9 = BatchNormalization()(conv_9)
relu_8 = LeakyReLU(0.2)(BatchNor_9)

conv_10 = Conv2D(filters=128, kernel_size=(5, 5), padding='Same')(relu_8)
BatchNor_10 = BatchNormalization()(conv_10)
relu_9 = LeakyReLU(0.2)(BatchNor_10)

conv_11 = Conv2D(filters=256, kernel_size=(5, 5), padding='Same')(relu_9)
BatchNor_11 = BatchNormalization()(conv_11)

add_3 = Add()([BatchNor_11, relu_7])
relu_10 = LeakyReLU(0.2)(add_3)

time_1 = TimeDistributed(Dense(256))(relu_10)
gru_1 = GRU(256, return_sequences=True)(time_1)
flatten_1 = Flatten()(gru_1)

fc_1 = Dense(3000, activation="relu")(flatten_1)
fc_2 = Dense(1000, activation="relu")(fc_1)
fc_3 = Dense(401, activation="softmax")(fc_2)
</code></pre> <p>The error:</p> <pre><code>Input 0 is incompatible with layer gru_3: expected ndim=3, found ndim=4
</code></pre> <p>As far as I know, the relu_10 output dimension is (None, 33, 33, 256), and after TimeDistributed the dimension should be 3D, because a GRU layer needs 3D input. My question is: how can I make the dimension become 3D after the TimeDistributed layer?</p> <p>And what is the function of TimeDistributed?</p>
2018-12-07 09:09:03.620000+00:00
2018-12-07 12:50:03.020000+00:00
2018-12-07 10:30:50.123000+00:00
python|tensorflow|keras
['https://arxiv.org/ftp/arxiv/papers/1805/1805.00312.pdf']
1
53,069,641
<p>So, it turns out I need to do standardization on the testing data for good accuracy. To do it, I directly feed uint8 input images to the tf.image.per_image_standardization function. The function converts the uint8 data to float32 and then does standardization (subtract the mean, divide by the std). You can find the source code of the function here: <a href="https://github.com/tensorflow/tensorflow/blob/r1.11/tensorflow/python/ops/image_ops_impl.py" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/r1.11/tensorflow/python/ops/image_ops_impl.py</a></p> <p>Now, I have the standardized float32 input images. What I did was write a quantization function to quantize the float32 images back to uint8. The math comes from this paper: <a href="https://arxiv.org/abs/1803.08607" rel="nofollow noreferrer">https://arxiv.org/abs/1803.08607</a></p> <p>Now, I have the <strong>standardized uint8</strong> input images. I then use the TFLite interpreter Python API to test the model. It works as expected.</p>
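<p>For reference, here is a minimal sketch of the kind of affine uint8 quantization involved (my own illustration; the <code>scale</code> and <code>zero_point</code> values below are made up, and in practice they should come from the model's input quantization parameters, e.g. <code>interpreter.get_input_details()[0]['quantization']</code>):</p> <pre><code>import numpy as np

def quantize_to_uint8(x, scale, zero_point):
    # Affine quantization: q = round(x / scale) + zero_point, clipped to [0, 255].
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

# Hypothetical standardized input and quantization parameters, purely for illustration.
standardized = np.random.normal(size=(1, 32, 32, 3)).astype(np.float32)
input_uint8 = quantize_to_uint8(standardized, scale=0.02, zero_point=128)
</code></pre>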
2018-10-30 17:17:57.817000+00:00
2018-10-30 17:17:57.817000+00:00
null
null
53,017,722
<p>I've trained a simple CNN model on Cifar-10 in tensorflow with fake quantization (<a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/quantize" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/quantize</a>). I then generated a .tflite file using toco. Now I want to use a Python interpreter to test the tflite model. <br/></p> <p>Since I used tf.image.per_image_standardization to subtract the mean and divide by the variance during training, I need to do the same thing to the testing data, right? But the problem is, my model is already fully quantized by tflite, and it only takes uint8 data as input. To do image standardization, I need to convert my image to float32. So how do I convert it back to uint8, or is image standardization even necessary for the testing data in this case? Thanks.</p>
2018-10-27 00:17:21.720000+00:00
2018-10-30 17:17:57.817000+00:00
null
tensorflow|tensorflow-lite|quantization
['https://github.com/tensorflow/tensorflow/blob/r1.11/tensorflow/python/ops/image_ops_impl.py', 'https://arxiv.org/abs/1803.08607']
2
35,385,851
<p>The mapping is not unique. There are many other solutions to this question.</p> <p>For example, this mapping will also work</p> <blockquote> <p>u = x √(x² + y² - x²y²) / √(x² + y²)</p> <p>v = y √(x² + y² - x²y²) / √(x² + y²)</p> </blockquote> <p>where <strong>(u,v)</strong> are circular disc coordinates and <strong>(x,y)</strong> are square coordinates.</p> <p>A picture is worth a thousand words, so here are some images to illustrate the non-uniqueness of the mapping and its inverse.</p> <p><a href="https://i.stack.imgur.com/n0ILh.png" rel="noreferrer"><img src="https://i.stack.imgur.com/n0ILh.png" alt="circular Brady bunch" /></a></p> <hr /> <p><a href="https://i.stack.imgur.com/7f3r1.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/7f3r1.jpg" alt="squared Boston Celtics" /></a></p> <p>For a C++ implementation of this other mapping, go to<br /> <a href="http://squircular.blogspot.com/2015/09/fg-squircle-mapping.html" rel="noreferrer">http://squircular.blogspot.com/2015/09/fg-squircle-mapping.html</a><br /> See <a href="http://squircular.blogspot.com" rel="noreferrer">http://squircular.blogspot.com</a> for more images of mapping results.</p> <p>See also <a href="http://arxiv.org/abs/1509.06344" rel="noreferrer">&quot;Analytical Methods for Squaring the Disc&quot;</a> for a paper discussing different mapping equations with proofs and derivations.</p>
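<p>As a quick sketch, here is my own transcription of the equations above into Python, with a guard for the origin where the formula would divide by zero:</p> <pre><code>import math

def square_to_disc(x, y):
    # Maps (x, y) in the square [-1, 1] x [-1, 1] onto the unit disc using
    # u = x * sqrt(x^2 + y^2 - x^2*y^2) / sqrt(x^2 + y^2), and likewise for v.
    r2 = x * x + y * y
    if r2 == 0.0:
        return 0.0, 0.0
    factor = math.sqrt(r2 - x * x * y * y) / math.sqrt(r2)
    return x * factor, y * factor

print(square_to_disc(1.0, 1.0))  # corner maps to (~0.7071, ~0.7071), on the unit circle
print(square_to_disc(1.0, 0.0))  # edge midpoint stays at (1.0, 0.0)
</code></pre>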
2016-02-13 22:03:24.663000+00:00
2016-05-05 19:34:06.100000+00:00
2020-06-20 09:12:55.060000+00:00
null
1,621,831
<p>I'm developing an indie video game, and have been operating under the assumption that because the thumbstick on my controller has a circular range of motion, it returns "circular" coordinates; that is, Cartesian coordinates constrained to a circular area (of radius 1). In fact, the coordinates are "square"; e.g., the top-right thumbstick position registers as x=1,y=1. When I convert the coordinates from Cartesian to polar, the magnitude can exceed 1 - which has the effect that the player can move faster diagonally than they can vertically or horizontally.</p> <p>So, to clarify, I want to record the position of an analog thumbstick in terms of a direction and magnitude, where the magnitude is between 0 and 1. The thumbstick returns coordinates on a square plane, so simply converting the coordinates from Cartesian to polar is not sufficient. I think I need to convert the coordinate <em>space</em>, but that is pressing the limits of my monkey brain.</p>
2009-10-25 19:32:51.227000+00:00
2016-05-05 19:34:06.100000+00:00
null
math|coordinate-systems
['https://i.stack.imgur.com/n0ILh.png', 'https://i.stack.imgur.com/7f3r1.jpg', 'http://squircular.blogspot.com/2015/09/fg-squircle-mapping.html', 'http://squircular.blogspot.com', 'http://arxiv.org/abs/1509.06344']
5
49,235,044
<p>In my understanding, the authors of the <a href="https://arxiv.org/abs/1409.0473" rel="nofollow noreferrer">original paper</a> avoid mixing base theory with implementation details. Thus, they defined the attention/context size to be equal to the encoder hidden size (<a href="https://i.stack.imgur.com/i9E98.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/i9E98.gif" alt="enter image description here"></a> for a bi-directional LSTM) as below:</p> <p><a href="https://i.stack.imgur.com/EhBzy.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EhBzy.gif" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/p3Vrh.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p3Vrh.gif" alt="enter image description here"></a></p> <p>However, if the encoder hidden size is too large, computing attention over long sequences may consume a significant amount of time and memory.</p> <p>So, the <a href="https://github.com/tensorflow/tensorflow/blob/283f03c825312efd3319cb37dc1a412288a536ec/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py" rel="nofollow noreferrer">tensorflow implementation</a> introduced an additional dense attention layer with a tunable <strong>attention_size</strong> option (<strong>attention_layer_size</strong> in later versions) as below:</p> <pre><code>if attention_layer is not None:
    attention = attention_layer(array_ops.concat([cell_output, context], 1))
else:
    attention = context
</code></pre> <p><strong>TL;DR;</strong> You can use the <strong>attention_size</strong> option to reduce the memory consumption of the attention mechanism when the encoder hidden size is too large.</p>
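<p>A minimal usage sketch of the option (TF 1.x contrib API; the encoder tensors here are hypothetical placeholders, not part of the original answer):</p> <pre><code>import tensorflow as tf

# Hypothetical encoder outputs: batch x time x hidden_size.
encoder_outputs = tf.placeholder(tf.float32, [None, 50, 512])
source_lengths = tf.placeholder(tf.int32, [None])

attention_mechanism = tf.contrib.seq2seq.LuongAttention(
    num_units=512,
    memory=encoder_outputs,
    memory_sequence_length=source_lengths)

# attention_layer_size projects [cell_output; context] down to 128 dimensions,
# which is the tunable attention size discussed above.
attn_cell = tf.contrib.seq2seq.AttentionWrapper(
    tf.nn.rnn_cell.LSTMCell(512),
    attention_mechanism,
    attention_layer_size=128)
</code></pre>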
2018-03-12 12:16:24.697000+00:00
2018-03-12 12:26:22.380000+00:00
2018-03-12 12:26:22.380000+00:00
null
48,741,081
<p>There is an argument <code>attention_size</code> in <code>tf.contrib.seq2seq.AttentionWrapper</code>. The documentation says: "The basic attention wrapper is tf.contrib.seq2seq.AttentionWrapper. This wrapper accepts an RNNCell instance, an instance of AttentionMechanism, and <strong>an attention depth parameter (attention_size)</strong>". But what is an attention depth? In Bahdanau's and Luong's papers I find no attention depth at all, and I don't clearly understand the source code of the attention mechanism. Can anyone tell me the meaning of 'attention_size' and the principle behind it? Thank you!</p>
2018-02-12 07:04:44.323000+00:00
2018-03-12 12:26:22.380000+00:00
null
tensorflow|attention-model
['https://arxiv.org/abs/1409.0473', 'https://i.stack.imgur.com/i9E98.gif', 'https://i.stack.imgur.com/EhBzy.gif', 'https://i.stack.imgur.com/p3Vrh.gif', 'https://github.com/tensorflow/tensorflow/blob/283f03c825312efd3319cb37dc1a412288a536ec/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py']
5
70,570,506
<p>Please see the Rcpp vignette <a href="https://cloud.r-project.org/web/packages/Rcpp/vignettes/Rcpp-libraries.pdf" rel="nofollow noreferrer">Rcpp libraries</a> -- which is also this <a href="https://arxiv.org/abs/1911.06416" rel="nofollow noreferrer">arXiv paper</a>.</p> <p>There are many examples among the 2400+ CRAN packages using <code>Rcpp</code>. My hunch would probably be to look at what I contributed to the <code>nloptr</code> package -- even though that is a more complicated scheme where we allow using <em>either</em> a system library if present (which could be the case with <code>fftw3</code> too) <em>or</em> downloading and building.</p> <p>Rcpp has been used <em>a lot</em> to build such glue. The most common, and simplest, approach is to look for <code>pkg-config</code> and query it for headers and libraries. Please give that a shot (with some looking around CRAN or GitHub for examples).</p> <p><em>Edit:</em> There is also an (old) <code>fftw3</code> package by Gabor <a href="https://github.com/MangoTheCat/fftw3" rel="nofollow noreferrer">at his previous employer's GitHub org</a> as well as another <a href="https://cran.r-project.org/package=fftwtools" rel="nofollow noreferrer">CRAN package <code>fftwtools</code></a> (which, if memory serves, I helped with once too, but I don't recall now what for).</p>
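<p>For the pkg-config route, a minimal <code>src/Makevars</code> could look like the following (a sketch assuming <code>fftw3</code> ships a pkg-config file on the target system, which it normally does):</p> <pre><code># src/Makevars
PKG_CPPFLAGS = `pkg-config --cflags fftw3`
PKG_LIBS = `pkg-config --libs fftw3`
</code></pre>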
2022-01-03 19:27:03.633000+00:00
2022-01-03 23:34:23.760000+00:00
2022-01-03 23:34:23.760000+00:00
null
70,570,263
<p>I have some C++ code that I intend to export into my r package using Rcpp. However, this code links to <a href="https://www.fftw.org/" rel="nofollow noreferrer">fftw3</a> via</p> <pre><code>#include &lt;fftw3.h&gt; </code></pre> <p>at the top of the file. When I try to compile this code, I unsurprisingly get the error</p> <pre><code>fatal error: 'fftw3.h' file not found </code></pre> <p>What is the proper way to link to this file so that it will be available upon compilation of my package? I know that a Makevars file can generally be used to link to system libraries but since this library is external I'm not sure what to do.</p> <p>Thanks,</p> <p>Eric.</p>
2022-01-03 18:59:45.370000+00:00
2022-01-03 23:34:23.760000+00:00
null
r|r-package
['https://cloud.r-project.org/web/packages/Rcpp/vignettes/Rcpp-libraries.pdf', 'https://arxiv.org/abs/1911.06416', 'https://github.com/MangoTheCat/fftw3', 'https://cran.r-project.org/package=fftwtools']
4
37,374,824
<p>Yes, updating only one weight at a time could result in a decreasing error value every time, but it's usually infeasible to do such updates in practical solutions using NNs. Most of today's architectures have ~10^6 parameters, so one epoch for every parameter would take enormously long. Moreover - because of the nature of backpropagation - you usually have to compute loads of different derivatives in order to compute the derivative with respect to a given parameter - so you would waste a lot of computation using such an approach.</p> <p>But the phenomenon which you mention was noticed a long time ago and there are some ways of dealing with it. The two most common issues connected with it are:</p> <ol> <li><strong><em>Covariate shift:</em></strong> this is when the error and weight updates of a given layer strongly depend on the output from the previous layer, so when you update it, the results in the next layer might be different. The most common way to deal with this problem right now is <a href="https://arxiv.org/pdf/1502.03167.pdf" rel="nofollow"><strong>Batch normalization</strong></a>.</li> <li><strong><em>Nonlinear functions vs linear differentiation:</em></strong> it's rarely considered when you think about BP, but the derivative is a linear operator, which might generate a lot of problems in gradient descent. The most counterintuitive example is the fact that if you multiply your input by a constant then every derivative will also be multiplied by the same number. This may lead to a lot of problems, but most recent learning methods do a great job of dealing with it.</li> </ol> <p>About BPTT, I strongly recommend Geoffrey Hinton's course about ANNs and especially this <a href="https://www.youtube.com/watch?v=gPdbTIEMQwY" rel="nofollow">video</a>.</p>
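<p>To make the second point concrete, here is a tiny illustration of my own (not from the linked material): for a single linear unit <code>f(w) = w * x</code>, the gradient <code>df/dw</code> equals <code>x</code>, so scaling the input scales the gradient by the same constant. This is one reason unnormalized inputs can destabilize gradient descent:</p> <pre><code>import numpy as np

def df_dw(x):
    # d(w * x)/dw = x, independent of w.
    return x

x = np.array([0.5, 1.0, 2.0])
print(df_dw(x))       # [0.5 1.  2. ]
print(df_dw(10 * x))  # [ 5. 10. 20.] -- gradients ten times larger
</code></pre>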
2016-05-22 13:13:02.813000+00:00
2016-05-22 13:13:02.813000+00:00
null
null
37,371,754
<p>I have gone through neural networks and have understood the derivation for back propagation almost perfectly (finally!). However, I had a small doubt. We are updating all the weights simultaneously, so what is the guarantee that they lead to a smaller cost? If the weights are updated one by one, it would definitely lead to a lower cost and it would be similar to linear regression. But if you update all the weights simultaneously, might we not cross the minima?</p> <p>Also, do we update the biases like we update the weights after each forward propagation and back propagation of each test case?</p> <p>Lastly, I have started reading about RNNs. What are some good resources to understand BPTT in RNNs?</p>
2016-05-22 07:39:37.807000+00:00
2016-05-22 13:13:02.813000+00:00
null
neural-network|recurrent-neural-network
['https://arxiv.org/pdf/1502.03167.pdf', 'https://www.youtube.com/watch?v=gPdbTIEMQwY']
2
33,821,462
<p>You might want to try <a href="http://rdrpostagger.sourceforge.net/" rel="nofollow">RDRPOSTagger</a>: a robust, easy-to-use and language-independent toolkit for POS and morphological tagging.</p> <p>(Programming languages: Python &amp; Java)</p> <p>RDRPOSTagger achieves fast performance in both the learning and tagging processes. In addition, it achieves very competitive accuracy in comparison to state-of-the-art results. See experimental results, including performance speed and tagging accuracy, in <a href="http://arxiv.org/abs/1412.4021" rel="nofollow">this paper</a>.</p> <p>RDRPOSTagger now supports pre-trained POS and morphological tagging models for 13 languages, including Thai and Vietnamese.</p>
2015-11-20 07:47:13.337000+00:00
2015-11-21 06:45:00.780000+00:00
2015-11-21 06:45:00.780000+00:00
null
5,280,572
<p>Can someone recommend an open source POS tagger for Korean, Indonesian, Thai and Vietnamese?</p> <p>That I can use to tag the corpus data that I currently have. (e.g. <a href="http://nlp.stanford.edu/software/index.shtml" rel="noreferrer">the stanford-postagger</a>)</p> <p>If you are a dev and care to share and let me test out the POS tagger, I don't mind either.</p> <p>With some modifications of the output, I've POS tagged the Vietnamese data with <a href="http://sourceforge.net/projects/jvntextpro/" rel="noreferrer">jvntextpro</a></p> <p>But I'd still like more input on Korean, Indonesian and Thai POS tagging.</p>
2011-03-12 04:31:26.813000+00:00
2015-11-21 06:45:00.780000+00:00
2012-11-20 06:27:06.223000+00:00
nlp|nltk|cjk|pos-tagger|thai
['http://rdrpostagger.sourceforge.net/', 'http://arxiv.org/abs/1412.4021']
2
60,328,116
<p>One way that has worked for me in the past to handle highly imbalanced datasets is the Synthetic Minority Oversampling Technique (SMOTE). Here is the paper for better understanding:</p> <p><a href="https://arxiv.org/pdf/1106.1813.pdf" rel="nofollow noreferrer">SMOTE Paper</a></p> <p>This works by synthetically oversampling the minority class (or classes, for that matter). To quote the paper:</p> <blockquote> <p>The minority class is over-sampled by taking each minority class sample and introducing synthetic examples along the line segments joining any/all of the k minority class nearest neighbors. Depending upon the amount of over-sampling required, neighbors from the k nearest neighbors are randomly chosen.</p> </blockquote> <p>This moves you closer towards balancing out your dataset. There is an implementation of SMOTE in the <a href="https://imbalanced-learn.readthedocs.io/en/stable/" rel="nofollow noreferrer">imblearn</a> package in Python.</p> <p>Here is a good read about <a href="https://imbalanced-learn.readthedocs.io/en/stable/auto_examples/over-sampling/plot_comparison_over_sampling.html#more-advanced-over-sampling-using-adasyn-and-smote" rel="nofollow noreferrer">different oversampling algorithms</a>. It includes oversampling using <a href="https://imbalanced-learn.readthedocs.io/en/stable/generated/imblearn.over_sampling.ADASYN.html" rel="nofollow noreferrer">ADASYN</a> as well as <a href="https://imbalanced-learn.readthedocs.io/en/stable/generated/imblearn.over_sampling.SMOTE.html" rel="nofollow noreferrer">SMOTE</a>.</p> <p>I hope this helps.</p>
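<p>For completeness, a minimal usage sketch with imbalanced-learn (assuming version 0.4 or later, where the method is called <code>fit_resample</code>; older releases use <code>fit_sample</code>), on a toy dataset built just for illustration:</p> <pre><code>from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Build a toy 95% / 5% imbalanced dataset purely for illustration.
X, y = make_classification(n_samples=1000, weights=[0.95, 0.05], random_state=42)
print("before:", Counter(y))

# SMOTE synthesizes new minority samples along segments between nearest
# minority-class neighbors, as described in the quote above.
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print("after: ", Counter(y_res))
</code></pre>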
2020-02-20 20:36:18.730000+00:00
2020-02-20 20:36:18.730000+00:00
null
null
60,327,063
<p>I'm using Auto-Sklearn and have a dataset with 42 classes that are heavily imbalanced. What is the best way to handle this imbalance? As far as I know, two approaches to handle imbalanced data within machine learning exist. Either using a resampling mechanism such as over- or under-sampling (or a combination of both) or to solve it on an algorithmic-level by choosing an inductive bias that would require in-depth knowledge about the algorithms used within Auto-Sklearn. I'm not quite sure on how to handle this problem. Is it anyhow possible to solve the imbalance directly within Auto-Sklearn or do I need to use resampling strategies as offered by e.g. imbalanced-learn? Which evaluation metric should be used after the models have been computed? The roc_auc_score for multiple classes is available since sklearn==0.22.1. However, Auto-Sklearn only supports sklearn up to version 0.21.3. Thanks in advance! </p>
2020-02-20 19:15:14.227000+00:00
2020-02-22 13:07:23.960000+00:00
null
python|machine-learning|scikit-learn|multiclass-classification
['https://arxiv.org/pdf/1106.1813.pdf', 'https://imbalanced-learn.readthedocs.io/en/stable/', 'https://imbalanced-learn.readthedocs.io/en/stable/auto_examples/over-sampling/plot_comparison_over_sampling.html#more-advanced-over-sampling-using-adasyn-and-smote', 'https://imbalanced-learn.readthedocs.io/en/stable/generated/imblearn.over_sampling.ADASYN.html', 'https://imbalanced-learn.readthedocs.io/en/stable/generated/imblearn.over_sampling.SMOTE.html']
5
41,177,133
<p>Sadly, I cannot find any more comprehensive documentation. Below I collect all related resources:</p> <ul> <li>How-to : <a href="https://github.com/efeiefei/tensorflow_documents_zh/blob/master/get_started/embedding_viz.md" rel="noreferrer">https://github.com/efeiefei/tensorflow_documents_zh/blob/master/get_started/embedding_viz.md</a></li> <li>Google Research Blog: <a href="https://research.googleblog.com/2016/12/open-sourcing-embedding-projector-tool.html" rel="noreferrer">announcement</a> and <a href="https://youtu.be/wvsE8jm1GzE" rel="noreferrer">animation</a> </li> <li>Paper : <a href="https://arxiv.org/pdf/1611.05469v1.pdf" rel="noreferrer">https://arxiv.org/pdf/1611.05469v1.pdf</a></li> <li>Source : <a href="https://github.com/tensorflow/embedding-projector-standalone" rel="noreferrer">https://github.com/tensorflow/embedding-projector-standalone</a></li> <li>2017 TF Dev Summit <a href="https://www.youtube.com/watch?v=eBbEDRsCmv4&amp;t=1105s" rel="noreferrer">tutorial</a> and <a href="https://github.com/dandelionmane/tf-dev-summit-tensorboard-tutorial" rel="noreferrer">code</a></li> <li>Issue <a href="https://github.com/tensorflow/tensorflow/issues/6322" rel="noreferrer">#6322</a> has some pointers and examples</li> </ul> <p>PS: Thanks for upvoting me. Now I can post all the links.</p> <h2>Update 2019-08</h2> <p>Now you can use the Embedding Projector easily in Colab with PyTorch's SummaryWriter:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import tensorflow as tf
import tensorboard as tb
tf.io.gfile = tb.compat.tensorflow_stub.io.gfile

from torch.utils.tensorboard import SummaryWriter

vectors = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 1]])
metadata = ['001', '010', '100', '111']  # labels

writer = SummaryWriter()
writer.add_embedding(vectors, metadata)
writer.close()

%load_ext tensorboard
%tensorboard --logdir=runs
</code></pre> <h2>Update 2020-02</h2> <p>The %tensorboard magic now works properly again.</p>
2016-12-16 04:07:38.017000+00:00
2020-02-24 07:00:46.873000+00:00
2020-02-24 07:00:46.873000+00:00
null
40,849,116
<p>How do I use the Embedding Projector included in Tensorboard?</p> <p>I can't find any documentation for it. There are some references to it <a href="https://www.tensorflow.org/versions/master/how_tos/embedding_viz/index.html" rel="noreferrer">here</a>, but there's no step-by-step example/tutorial on how to use it.</p>
2016-11-28 16:28:35.660000+00:00
2020-02-24 07:00:46.873000+00:00
2019-11-12 20:50:52.260000+00:00
tensorboard
['https://github.com/efeiefei/tensorflow_documents_zh/blob/master/get_started/embedding_viz.md', 'https://research.googleblog.com/2016/12/open-sourcing-embedding-projector-tool.html', 'https://youtu.be/wvsE8jm1GzE', 'https://arxiv.org/pdf/1611.05469v1.pdf', 'https://github.com/tensorflow/embedding-projector-standalone', 'https://www.youtube.com/watch?v=eBbEDRsCmv4&t=1105s', 'https://github.com/dandelionmane/tf-dev-summit-tensorboard-tutorial', 'https://github.com/tensorflow/tensorflow/issues/6322']
8
17,122,326
<p>OpenCL and CUDA are equally fast if they are tweaked correctly for the target architecture. However, tweaking may negatively impact portability.</p> <p>Links:</p> <ul> <li><a href="http://arxiv.org/ftp/arxiv/papers/1005/1005.2581.pdf" rel="nofollow">http://arxiv.org/ftp/arxiv/papers/1005/1005.2581.pdf</a></li> <li><a href="http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&amp;arnumber=6047190&amp;tag=1" rel="nofollow">http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&amp;arnumber=6047190&amp;tag=1</a></li> </ul>
2013-06-15 09:52:58.280000+00:00
2013-06-15 09:52:58.280000+00:00
null
null
17,122,288
<p>I coded a program to create a color lookup table. I did it in CUDA and OpenCL; from my point of view both programs are pretty much the same, i.e. they use the same amount of constant memory and global memory, the same loops and branching code, etc.</p> <p>I measured the running time, and CUDA performed slightly better than OpenCL. My question is whether using CUDA+NvidiaGPU is faster than OpenCL+NvidiaGPU because CUDA is the native way of programming such GPUs?</p> <p>Could you share some links to info related to this topic?</p>
2013-06-15 09:48:40.300000+00:00
2013-06-15 09:52:58.280000+00:00
null
cuda|opencl|gpgpu
['http://arxiv.org/ftp/arxiv/papers/1005/1005.2581.pdf', 'http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=6047190&tag=1']
2
69,799,418
<p>This question is better suited to <a href="https://stats.stackexchange.com/">Cross Validated</a>. Based on my understanding from reading the 2014 GAN paper and other resources, the discriminator is kept at a near-optimal level so that it can provide better guidance to the generator, which then improves. This is done by first optimizing the discriminator for some steps and then optimizing the generator for one step.</p> <p>The discriminator tries to maximize the probability of assigning correct labels both to data from the training examples and to samples generated by the generator. Simultaneously, the generator tries to minimize <code>log(1 - D(G(z)))</code> or maximize <code>log(D(G(z)))</code>.</p> <p>At the start, both the generator and the discriminator are weak. The discriminator will mostly classify incorrectly, and the generated samples will be far from the training distribution.</p> <p>If the generator could always completely fool the discriminator, the discriminator would label all generated data as real. If the discriminator could perfectly distinguish between generated and real data, the generator would not know how to improve.</p> <p>In practice, for a mini batch the discriminator <code>D</code> is optimized (Section 3 of the paper) for <code>k</code> steps, then the generator <code>G</code> is optimized for 1 step.</p> <p><a href="https://i.stack.imgur.com/AMHeA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AMHeA.png" alt="enter image description here" /></a></p> <p>If the generator and discriminator were trained at the same time, the generator would not get proper feedback from the discriminator to generate better results, and the discriminator would misclassify generated and real data. I am not sure it would converge, and even if it did, it would likely take a long time.</p> <p>If the discriminator is slightly better, it can catch some of the generated fake results, but not all. This in turn allows the generator to improve its generation ability using feedback from the discriminator, so it can fool the discriminator next time. In the following iterations the discriminator keeps improving, and the generator also has to get better to fool it. This results in a gradual improvement of the generated data until it closely matches the training data.</p> <p>Corrections for any mistakes are welcome.</p> <h4>References</h4> <ul> <li>GAN 2014 paper section 3, <a href="https://arxiv.org/pdf/1406.2661.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1406.2661.pdf</a>.</li> <li>Coursera GAN training, <a href="https://www.coursera.org/lecture/build-basic-generative-adversarial-networks-gans/putting-it-all-together-gIAJ0" rel="nofollow noreferrer">https://www.coursera.org/lecture/build-basic-generative-adversarial-networks-gans/putting-it-all-together-gIAJ0</a>.</li> </ul>
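<p>As a self-contained toy illustration of this alternating schedule (my own minimal PyTorch sketch, not code from the paper; all sizes and hyperparameters are arbitrary):</p> <pre><code>import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
k, batch = 3, 64  # k discriminator steps per generator step

for step in range(2000):
    # 1) Train D for k steps with G frozen (detach stops gradients into G).
    for _ in range(k):
        real = torch.randn(batch, 1) * 0.5 + 2.0  # "real" data ~ N(2, 0.5)
        fake = G(torch.randn(batch, 8)).detach()
        loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()

    # 2) Train G for one step: try to make D label fakes as real
    #    (the non-saturating maximize-log-D(G(z)) variant).
    fake = G(torch.randn(batch, 8))
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
</code></pre>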
2021-11-01 15:54:19.937000+00:00
2021-11-01 15:54:19.937000+00:00
null
null
69,771,639
<p>I recently started studying GANs, but when I read about the GAN training algorithm, it said the first step is to train the discriminator and then to train the generator, in a loop, while they're not balanced. I'm confused: can't we train both at the same time?</p>
2021-10-29 15:48:11.327000+00:00
2021-11-01 15:54:19.937000+00:00
null
generative-adversarial-network
['https://stats.stackexchange.com/', 'https://i.stack.imgur.com/AMHeA.png', 'https://arxiv.org/pdf/1406.2661.pdf', 'https://www.coursera.org/lecture/build-basic-generative-adversarial-networks-gans/putting-it-all-together-gIAJ0']
4
68,784,792
<p>I had this question once, and this is what I found:</p> <p>This passage from a report about it by a prominent pioneer of SEM pretty much sums it up:</p> <p>&quot;This misunderstanding probably stems from classical exploratory factor analysis where factor loadings are correlations if a correlation matrix is analyzed and the factors are standardized and uncorrelated (orthogonal). However, if the factors are correlated (oblique), the factor loadings are regression coefficients and not correlations and as such they can be larger than one in magnitude.&quot;</p> <p>The absolute value of factor loadings can be greater than 1 (please read this for more details: <a href="https://stats.stackexchange.com/questions/266304/in-factor-analysis-or-in-pca-what-does-it-mean-a-factor-loading-greater-than">https://stats.stackexchange.com/questions/266304/in-factor-analysis-or-in-pca-what-does-it-mean-a-factor-loading-greater-than</a>)</p> <p>You can read more about EFA and CFA in this paper: <a href="https://arxiv.org/abs/1905.05598" rel="nofollow noreferrer">https://arxiv.org/abs/1905.05598</a></p>
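<p>To see numerically how a (standardized) regression coefficient can exceed 1 in magnitude when the predictors are correlated, here is a toy illustration of my own, not taken from the sources above:</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two correlated "factors" (corr ~ 0.9), both with unit variance.
f1 = rng.normal(size=n)
f2 = 0.9 * f1 + np.sqrt(1 - 0.9 ** 2) * rng.normal(size=n)

# An outcome built from both factors, then standardized.
y = 1.5 * f1 - 1.2 * f2 + 0.1 * rng.normal(size=n)
y = (y - y.mean()) / y.std()

X = np.column_stack([f1, f2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # roughly [ 2.2, -1.8 ]: both magnitudes exceed 1, yet the fit is valid
</code></pre>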
2021-08-14 15:39:49.767000+00:00
2021-08-14 15:45:29.353000+00:00
2021-08-14 15:45:29.353000+00:00
null
63,614,842
<p>I performed factor analysis using ConfirmatoryFactorAnalyzer from the factor_analyzer package. As far as I understand SEM, the factor loadings should be the Pearson coefficients between latent variables and measured variables, but one of them is equal to -1.17, so it cannot be a correlation coefficient.</p> <p>Does it mean something else in the case of this package? Should I standardize it somehow (my data is already standardized)? The docs don't really help:</p> <blockquote> <p>loadings_: The factor loadings matrix.</p> </blockquote> <p>Here is my code:</p> <pre><code>def sem_analysis(data, group1, group2):
    scaler = StandardScaler()
    scaled_data = pd.DataFrame(scaler.fit_transform(data), columns=data.columns)
    required_data = scaled_data[group1 + group2]

    model_dict = {&quot;F1&quot;: group1, &quot;F2&quot;: group2}
    model_spec = ModelSpecificationParser.parse_model_specification_from_dict(required_data, model_dict)
    cfa = ConfirmatoryFactorAnalyzer(model_spec, disp=False)
    cfa.fit(required_data.values)
    return cfa.loadings_
</code></pre> <p>And the result I get on randomly generated data:</p> <pre><code>[[ 0.81664434  0.        ]
 [ 0.76591388  0.        ]
 [-0.84197706  0.        ]
 [ 0.         -0.27572329]
 [ 0.         -1.17491134]
 [ 0.          0.39020765]]
</code></pre>
2020-08-27 11:17:47.927000+00:00
2021-08-14 15:45:29.353000+00:00
2020-08-28 09:49:46.637000+00:00
python|statistics|data-science|data-analysis
['https://stats.stackexchange.com/questions/266304/in-factor-analysis-or-in-pca-what-does-it-mean-a-factor-loading-greater-than', 'https://arxiv.org/abs/1905.05598']
2
58,600,604
<p>In <a href="https://arxiv.org/pdf/1612.03144.pdf" rel="nofollow noreferrer">Feature Pyramid Networks for Object Detection</a>, Faster RCNN shows different mAP on objects of different sizes. The model has higher mAP on large objects than on small objects. In <a href="http://papers.nips.cc/paper/5638-faster-r-cnn-towards-real-time-object-detection-with-region-proposal-networks.pdf" rel="nofollow noreferrer">Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks</a>, Faster RCNN resizes input images such that their shorter side is 600 pixels. Thus, in my opinion, the relative size of objects in images does matter in detection. Cropping a large image and using the smaller crop as input may facilitate the detection of small objects in the raw image, since small objects become relatively large objects in the new image.</p> <p><a href="https://i.stack.imgur.com/4xReZ.png" rel="nofollow noreferrer">FPN in a basic Faster R-CNN system has different performance on small, medium and large objects.</a></p> <ol> <li><a href="https://github.com/rbgirshick/py-faster-rcnn/issues/304" rel="nofollow noreferrer">Discussion on GitHub</a> </li> <li><a href="https://github.com/rbgirshick/py-faster-rcnn/issues/371" rel="nofollow noreferrer">Another discussion on GitHub</a></li> </ol>
2019-10-29 02:06:38.777000+00:00
2019-10-29 03:30:28.213000+00:00
2019-10-29 03:30:28.213000+00:00
null
58,018,623
<p>I have just done some transfer learning with a Faster R-CNN using the TensorFlow Object Detection API. I am on tensorflow 1.14, and the backbone network is faster_rcnn_resnet101_coco. Do frozen networks resize images fed to them when making predictions?</p> <p>I ask because when I feed the model an image that is much larger than those I trained on, it doesn't recognize any of the objects. When I crop the image down to 1200x1200, the objects are all identical, but it works great.</p> <p>Does the model include image size constraints? Should I be making predictions using similar dimensions to those in the config file, even though the objects are the same size in the 3000x3000 image?</p> <p>In the config file for training, I constrain the input images:</p> <pre><code>image_resizer {
  keep_aspect_ratio_resizer {
    min_dimension: 200
    max_dimension: 1200
  }
}
</code></pre> <p>Does this mean that in the trained model, that I am now using, if I feed it an image larger than 1200x1200, it will scale it down? Here is how I do the prediction with the loaded frozen model:</p> <pre><code>with model.as_default():
    with tf.Session(graph=model) as sess:
        imageTensor = model.get_tensor_by_name("image_tensor:0")
        boxesTensor = model.get_tensor_by_name("detection_boxes:0")
        scoresTensor = model.get_tensor_by_name("detection_scores:0")
        classesTensor = model.get_tensor_by_name("detection_classes:0")
        numDetections = model.get_tensor_by_name("num_detections:0")

        # Make prediction
        (boxes, scores, labels, N) = sess.run(
            [boxesTensor, scoresTensor, classesTensor, numDetections],
            feed_dict={imageTensor: image})
</code></pre> <p>Related: <a href="https://stackoverflow.com/questions/57855824/training-image-size-faster-rcnn">Training Image Size Faster-RCNN</a></p> <p>Also, this post makes me think it should handle any input size, but it clearly doesn't handle them the same, so I'm confused: <a href="https://stackoverflow.com/questions/53387059/faster-rcnn-inception-v2-input-size?rq=1">Faster RCNN + inception v2 input size</a></p>
2019-09-19 20:36:01.403000+00:00
2022-09-16 06:07:47.343000+00:00
2019-09-20 02:55:39.927000+00:00
python|tensorflow|object-detection-api
['https://arxiv.org/pdf/1612.03144.pdf', 'http://papers.nips.cc/paper/5638-faster-r-cnn-towards-real-time-object-detection-with-region-proposal-networks.pdf', 'https://i.stack.imgur.com/4xReZ.png', 'https://github.com/rbgirshick/py-faster-rcnn/issues/304', 'https://github.com/rbgirshick/py-faster-rcnn/issues/371']
5
68,168,440
<p><strong>Question 1</strong></p> <p>Yolov5 performs object detection. The generated bounding boxes are passed to Deep Sort, which tracks the objects. Tracking is done by an association based on two things:</p> <ol> <li>Mahalanobis distance between predicted Kalman states and newly arrived measurements</li> <li>Appearance descriptor</li> </ol> <p>In combination, both metrics complement each other by serving different aspects of the assignment problem. The Mahalanobis distance is suitable when the motion uncertainty is low, but when there is occlusion and/or unaccounted frame skipping, the Mahalanobis distance is rather useless. Therefore, the idea is to use a visual metric to complement the motion metric. In order to do this, an appearance descriptor is calculated for each bounding box detection and compared to a &quot;gallery&quot; of appearance descriptors; this is particularly useful to recover identities after long-term occlusions or rapid displacements.</p> <p>So, yes. Both Yolo and DeepSort are used to generate the txt file.</p> <p><strong>Question 2.</strong></p> <p>In the original <a href="https://arxiv.org/pdf/1703.07402.pdf" rel="nofollow noreferrer">Deep Sort paper</a> it is stated that the appearance descriptors are generated by a CNN trained on MARS. <a href="https://github.com/ZQPei/deep_sort_pytorch#training-the-re-id-model" rel="nofollow noreferrer">Some implementations</a> of Deep Sort also use the Market1501 dataset for this purpose. Both of them contain only persons. They chose these datasets because they focus on the MOT challenge dataset, which only contains persons.</p> <p>So, ideally you would train your own appearance descriptor on the class you want to track.</p>
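<p>Returning to the association step in Question 1: the paper combines the two metrics as a weighted sum, <code>c = lambda * d_motion + (1 - lambda) * d_appearance</code>. A tiny numeric sketch (the distance matrices below are made-up values, not real tracker output):</p> <pre><code>import numpy as np

lam = 0.5
d_motion = np.array([[0.2, 3.1],      # Mahalanobis distance, track i vs detection j
                     [2.8, 0.4]])
d_appearance = np.array([[0.1, 0.9],  # cosine distance between appearance descriptors
                         [0.8, 0.2]])

cost = lam * d_motion + (1 - lam) * d_appearance
print(cost.argmin(axis=1))  # best detection per track: [0 1]
</code></pre>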
2021-06-28 18:31:22.180000+00:00
2021-06-28 18:31:22.180000+00:00
null
null
65,518,829
<p>First of all, I am practicing based on 'https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch'.</p> <p>Question 1</p> <p>Is the created txt file a file to which both the yolo algorithm and the deepsort algorithm are applied?</p> <p>Question 2</p> <p>I trained the detector yolo to apply it on custom data. Does deepsort also need to be trained on custom data afterwards?</p>
2020-12-31 09:10:56.883000+00:00
2021-07-19 19:13:07.270000+00:00
null
image|object-detection|yolo|yolov5
['https://arxiv.org/pdf/1703.07402.pdf', 'https://github.com/ZQPei/deep_sort_pytorch#training-the-re-id-model']
2
62,919,841
<p>Let me talk about random integer generating algorithms that are &quot;optimal&quot; in terms of the number of random bits they use on average. In the rest of this post, we will assume we have a &quot;true&quot; random generator that can produce unbiased and independent random bits.</p> <p>In 1976, D. E. Knuth and A. C. Yao showed that any algorithm that produces random integers with a given probability, using only random bits, can be represented as a binary tree, where random bits indicate which way to traverse the tree and each leaf (endpoint) corresponds to an outcome. They also gave lower bounds on the number of bits a given algorithm will need on average for this task. In this case, an <em>optimal</em> algorithm to generate integers in <code>[0, n)</code> uniformly will need at most <code>log2(n) + 2</code> bits on average. There are many examples of <em>optimal</em> algorithms in this sense. One of them is the <a href="https://arxiv.org/abs/1304.1916" rel="nofollow noreferrer">Fast Dice Roller</a> by J. Lumbroso (2013) (implemented below), and another is perhaps the algorithm given in the <a href="http://mathforum.org/library/drmath/view/65653.html" rel="nofollow noreferrer">Math Forum</a> in 2004. On the other hand, all the algorithms <a href="https://www.pcg-random.org/posts/bounded-rands.html" rel="nofollow noreferrer">surveyed by M. O'Neill</a> are not optimal, since they rely on generating blocks of random bits at a time.</p> <p>However, any <em>optimal</em> integer generator that is also <em>unbiased</em> will, in general, run forever in the worst case, as also shown by Knuth and Yao. Going back to the binary tree, each one of the n outcomes labels leaves in the binary tree so that each integer in [0, n) can occur with probability 1/n. But if 1/n has a non-terminating binary expansion (which will be the case if n is not a power of 2), this binary tree will necessarily either—</p> <ul> <li>have an &quot;infinite&quot; depth, or</li> <li>include &quot;rejection&quot; leaves at the end of the tree,</li> </ul> <p>and in either case, the algorithm will run forever in the worst case, even if it uses very few random bits on average. (On the other hand, when n is a power of 2, the optimal binary tree will have a finite depth and no rejection nodes.) The Fast Dice Roller is an example of an algorithm that uses &quot;rejection&quot; events to ensure it's unbiased; see the comment in the code below.</p> <p>Thus, in general, <strong>a random integer generator can be <em>either</em> unbiased <em>or</em> constant-time (or even neither), but not both.</strong> In particular, there is no way to &quot;fix&quot; the worst case of an indefinite running time without introducing bias. For instance, modulo reductions (e.g., <code>mt_rand() % n</code>) are equivalent to a binary tree in which rejection leaves are replaced with labeled outcomes — but since there are more possible outcomes than rejection leaves, only some of the outcomes can take the place of the rejection leaves, introducing bias. The same kind of binary tree — and the same kind of bias — results if you stop rejecting after a set number of iterations. (However, this bias may be negligible depending on the application. There are also security aspects to random integer generation, which are too complicated to discuss in this answer.)</p> <p>Note that we assumed we had a random bit generator.
In the case of your answer, that can be achieved with <code>ReadBitsAndConvertToInteger(1)</code>.</p> <h3>Fast Dice Roller Implementation</h3> <p>The following is a JavaScript implementation of the Fast Dice Roller. Note that it uses rejection events and a loop to ensure it's unbiased. <code>ReadBitsAndConvertToInteger(1)</code> is a random bit generator (e.g., as used in your answer).</p> <pre><code>function randomInt(minInclusive, maxExclusive) {
    var maxInclusive = (maxExclusive - minInclusive) - 1
    var x = 1
    var y = 0
    while(true) {
        x = x * 2
        var randomBit = ReadBitsAndConvertToInteger(1)
        y = y * 2 + randomBit
        if(x &gt; maxInclusive) {
            if (y &lt;= maxInclusive) {
                return y + minInclusive
            }
            // Rejection
            x = x - maxInclusive - 1
            y = y - maxInclusive - 1
        }
    }
}
</code></pre>
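<p>As a tiny numeric illustration of the modulo-bias point above (in Python for brevity): reducing a uniform 3-bit value modulo 5 cannot be uniform, because 8 equally likely inputs cannot split evenly into 5 buckets:</p> <pre><code>from collections import Counter

# v % 5 over the 8 equally likely 3-bit values 0..7:
print(Counter(v % 5 for v in range(8)))
# Counter({0: 2, 1: 2, 2: 2, 3: 1, 4: 1}) -- 0, 1 and 2 are more likely than 3 and 4
</code></pre>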
2020-07-15 16:47:46.413000+00:00
2020-07-29 02:40:21.323000+00:00
2020-07-29 02:40:21.323000+00:00
null
10,896,997
<p>I have a project which uses php's mt_rand() to generate different random integers but I have recently gained access to a stream of real random bits. I am having trouble figuring out how to create a function similar to mt_rand(), where I can get a random integer between two values, from my stream of bits. How can I achieve this?</p>
2012-06-05 12:04:49.700000+00:00
2020-07-29 02:40:21.323000+00:00
null
php
['https://arxiv.org/abs/1304.1916', 'http://mathforum.org/library/drmath/view/65653.html', 'https://www.pcg-random.org/posts/bounded-rands.html']
3
50,161,875
<p>I am the author of the R package <b>optimParallel</b>. It provides parallel versions of the gradient-based optimization methods of <code>optim()</code>. The main function of the package is <code>optimParallel()</code>, which has the same usage and output as <code>optim()</code>. Using <code>optimParallel()</code> can significantly reduce optimization times, as illustrated in the following figure (<code>p</code> is the number of parameters).</p> <p><a href="https://i.stack.imgur.com/XIjk9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XIjk9.png" alt="enter image description here"></a></p> <p>See <a href="https://cran.r-project.org/package=optimParallel" rel="nofollow noreferrer">https://cran.r-project.org/package=optimParallel</a> and <a href="http://arxiv.org/abs/1804.11058" rel="nofollow noreferrer">http://arxiv.org/abs/1804.11058</a> for more information.</p>
2018-05-03 18:39:53.063000+00:00
2018-05-03 18:39:53.063000+00:00
null
null
15,397,390
<p>I'm running R on linux box that has 8 multicore processors, and have an optimization problem I'd like to speed up by parallelizing the optimization routine itself. Importantly, this problem involves (1) multiple parameters, and (2) <em>inherently slow</em> model runs. A fairly common problem! </p> <p>Anyone know of a <em>parallelized optimizer</em> for such occasions?</p> <p>More specifically, solvers like <code>nlm()</code> run multiple model evaluations (two per parameter value) each time the algorithm takes a step in parameter space, so parallelizing that instance of multiple model runs would greatly speed things up in these situations when more than a few parameter values are being fit. </p> <p>It seems like code that makes use of the package <code>parallel</code> could be written in a way that the user would have to do <em>minimal</em> code modification to move from using <code>nlm()</code> or <code>optim()</code> to this parallelized optimization routine. That is, it seems one could rewrite these routines basically with no changes, except that the step of calling the model multiple times, as is common in gradient-based methods, would be done in parallel.</p> <p>Ideally, something like nlmPara() would take code that looks like</p> <pre><code>fit &lt;- nlm(MyObjFunc, params0); </code></pre> <p>and require only minor modifications, e.g., </p> <pre><code>fit &lt;- nlmPara(MyObjFunc, params0, ncores=6); </code></pre> <p>Thoughts/suggestions? </p> <p>PS: I've taken steps to speed up those model runs, but they're slow for a variety of reasons (i.e. I don't need advice on speeding up the model runs! ;-) ). </p>
2013-03-13 22:04:16.430000+00:00
2018-05-03 18:39:53.063000+00:00
null
r|optimization|parallel-processing
['https://i.stack.imgur.com/XIjk9.png', 'https://cran.r-project.org/package=optimParallel', 'http://arxiv.org/abs/1804.11058']
3
37,901,831
<p>I also tried to find research articles on this but haven't found any. I would suggest trying the aspect-based sentiment analysis algorithms. The similarity I found is that there we recognize aspects of a single entity in a sentence and then find the sentiment of each aspect. Similarly, we can train our model using the same algorithm so that it detects entities the way it detects aspects, and then find the sentiment of those entities. I didn't try this, but I am going to. Let me know if it works or not. Also, there are various ways to do this. The following are the links for a few articles.</p> <p><a href="http://arxiv.org/pdf/1605.08900v1.pdf" rel="nofollow">http://arxiv.org/pdf/1605.08900v1.pdf</a> <a href="https://cs224d.stanford.edu/reports/MarxElliot.pdf" rel="nofollow">https://cs224d.stanford.edu/reports/MarxElliot.pdf</a></p>
2016-06-18 21:32:28.517000+00:00
2016-06-18 21:32:28.517000+00:00
null
null
11,141,194
<p>I've been working on document level sentiment analysis since past 1 year. <em>Document level sentiment analysis</em> provides the sentiment of the complete document. For example - The text "<em>Nokia is good but vodafone sucks big time</em>" would have a negative polarity associated with it as it would be agnostic to the entities Nokia and Vodafone. <em>How would it be possible to get entity level sentiment, like positive for Nokia but negative for Vodafone</em> ? Are there any research papers providing a solution to such problems ?</p>
2012-06-21 15:11:02.210000+00:00
2018-04-19 14:32:30.293000+00:00
2018-04-19 04:33:09.213000+00:00
nlp|sentiment-analysis|named-entity-recognition
['http://arxiv.org/pdf/1605.08900v1.pdf', 'https://cs224d.stanford.edu/reports/MarxElliot.pdf']
2
48,688,732
<p>I believe the information you are looking for can be found in this link about NLP (Natural Language Processing) and using it in a CNN (Convolutional Neural Network):</p> <p><a href="http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/" rel="nofollow noreferrer">http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/</a></p> <p>It's also worth noting that CNNs were designed primarily for 'vision', i.e. image parsing, and in most cases a DNN (Deep Neural Network) is needed for such a complex requirement.</p> <p>DNN/NLP reading can be found here: <a href="https://arxiv.org/pdf/1703.03091.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1703.03091.pdf</a></p> <p><strong>TL;DR</strong></p> <p>There is no single specific algorithm, but rather a set of algorithms that can be used together to infer the information above. Look into Microsoft's white papers on language research.</p>
2018-02-08 15:11:47.113000+00:00
2018-02-08 15:11:47.113000+00:00
null
null
48,688,351
<p>How can a program learn to map pronouns <em>correctly</em> to something else in the text? </p> <p>For example, in text "Lisa beats Jenny. She is cruel.", I would like "She" to map to "Lisa".</p> <p>Is there a known name for such algorithm? If yes, what is it?</p>
2018-02-08 14:55:01.183000+00:00
2022-03-02 10:53:47.603000+00:00
2018-02-08 18:11:03.797000+00:00
machine-learning|nlp|artificial-intelligence|nltk
['http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/', 'https://arxiv.org/pdf/1703.03091.pdf']
2
54,773,912
<h1>Solution Overview</h1> <p>Okay, I would approach the problem from multiple directions. There are some great suggestions here, and if I were you I would use an ensemble of those approaches (majority voting, predicting the label agreed upon by more than 50% of the classifiers in your binary case).</p> <p><strong>I'm thinking about the following approaches:</strong></p> <ul> <li><strong>Active learning</strong> (example approach provided by me below)</li> <li><a href="https://stackoverflow.com/a/54757134/10886420"><strong>MediaWiki backlinks</strong></a> provided as an answer by <a href="https://stackoverflow.com/users/10317656/tavoglc">@TavoGC</a></li> <li><strong>SPARQL</strong> ancestral categories provided as a comment to your question by <a href="https://stackoverflow.com/users/7879193/stanislav-kralin">@Stanislav Kralin</a> and/or <a href="https://stackoverflow.com/a/54781366/10886420">parent categories</a> provided by <a href="https://stackoverflow.com/users/10554298/meena-nagarajan">@Meena Nagarajan</a> (those two could be an ensemble on their own based on their differences, but for that you would have to contact both creators and compare their results).</li> </ul> <p>This way, two out of three would have to agree that a certain concept is a medical one, which minimizes the chance of an error further.</p> <p>While we're at it, I would argue <strong>against</strong> the approach presented by <a href="https://stackoverflow.com/users/10953776/anand-v-singh">@ananand_v.singh</a> in <a href="https://stackoverflow.com/a/54721431/10886420">this answer</a>, because:</p> <ul> <li>the distance metric <strong>should not</strong> be Euclidean; cosine similarity is a much better metric (used by, e.g., <a href="https://spacy.io/" rel="noreferrer">spaCy</a>) as it does not take into account the magnitude of the vectors (and it shouldn't, as that's how word2vec and GloVe were trained)</li> <li>many artificial clusters would be created if I understood correctly, while we only need two: a medicine one and a non-medicine one. Furthermore, the centroid of medicine <strong>is not</strong> centered on the word medicine itself. This poses additional problems; say the centroid is moved far away from medicine, then other words like, say, <code>computer</code> or <code>human</code> (or any other word that, in your opinion, does not fit into medicine) might get into the cluster.</li> <li>it's hard to evaluate the results; even more so, the matter is strictly subjective.
Furthermore, word vectors are hard to visualize and understand (casting them into lower dimensions [2D/3D] using PCA/t-SNE/similar for so many words would give us totally nonsensical results [yeah, I have tried to do it; PCA gets around 5% explained variance for your longer dataset, which is really, really low]).</li> </ul> <p>Based on the problems highlighted above, I have come up with a solution using <a href="https://en.wikipedia.org/wiki/Active_learning_(machine_learning)" rel="noreferrer">active learning</a>, which is a rather forgotten approach to such problems.</p> <h1>Active Learning approach</h1> <p>In this subset of machine learning, when we have a hard time coming up with an exact algorithm (like what it means for a term to be part of the <code>medical</code> category), we ask a human &quot;expert&quot; (who doesn't actually have to be an expert) to provide some answers.</p> <h2>Knowledge encoding</h2> <p>As <a href="https://stackoverflow.com/users/10953776/anand-v-singh">anand_v.singh</a> pointed out, word vectors are one of the most promising approaches, and I will use them here as well (differently though, and IMO in a much cleaner and easier fashion).</p> <p>I'm not going to repeat his points in my answer, so I will add my two cents:</p> <ul> <li><strong>Do not</strong> use contextualized word embeddings such as the currently available state of the art (e.g. <a href="https://arxiv.org/pdf/1810.04805.pdf" rel="noreferrer">BERT</a>)</li> <li>Check how many of your concepts have <strong>no representation</strong> (i.e. are represented as a vector of zeros). It should be checked (and it is checked in my code; there will be further discussion when the time comes), and you may use the embedding which has most of them present.</li> </ul> <h3>Measuring similarity using <em>spaCy</em></h3> <p>This class measures the similarity between <code>medicine</code>, encoded as spaCy's GloVe word vector, and every other concept.</p> <pre><code>class Similarity:
    def __init__(self, centroid, nlp, n_threads: int, batch_size: int):
        # In our case it will be medicine
        self.centroid = centroid
        # spaCy's Language model (english), which will be used to return
        # similarity to centroid of each concept
        self.nlp = nlp
        self.n_threads: int = n_threads
        self.batch_size: int = batch_size

        self.missing: typing.List[int] = []

    def __call__(self, concepts):
        concepts_similarity = []
        # nlp.pipe is faster for many documents and can work in parallel (not blocked by GIL)
        for i, concept in enumerate(
            self.nlp.pipe(
                concepts, n_threads=self.n_threads, batch_size=self.batch_size
            )
        ):
            if concept.has_vector:
                concepts_similarity.append(self.centroid.similarity(concept))
            else:
                # If document has no vector, it's assumed to be totally
                # dissimilar to centroid
                concepts_similarity.append(-1)
                self.missing.append(i)

        return np.array(concepts_similarity)
</code></pre> <p>This code will return a number for each concept measuring how similar it is to the centroid. Furthermore, it records the indices of concepts missing their representation.
It might be called like this:</p> <pre><code>import json import typing import numpy as np import spacy nlp = spacy.load("en_vectors_web_lg") centroid = nlp("medicine") concepts = json.load(open("concepts_new.txt")) concepts_similarity = Similarity(centroid, nlp, n_threads=-1, batch_size=4096)( concepts ) </code></pre> <p>You may substitute you data in place of <code>new_concepts.json</code>.</p> <p>Look at <a href="https://spacy.io/usage/models" rel="noreferrer">spacy.load</a> and notice I have used <a href="https://spacy.io/models/en#en_vectors_web_lg" rel="noreferrer"><code>en_vectors_web_lg</code></a>. It consists of <strong>685.000 unique word vectors</strong> (which is a lot), and may work out of the box for your case. You have to download it separately after installing spaCy, more info provided in the links above.</p> <p><strong>Additionally</strong> you may want to use <strong>multiple centroid words</strong>, e.g. add words like <code>disease</code> or <code>health</code> and average their word vectors. I'm not sure whether that would affect positively your case though.</p> <p><strong>Other possibility</strong> might be to use multiple centroids and calculate similiarity between each concept and multiple of centroids. We may have a few thresholds in such case, this is likely to remove some <a href="https://en.wikipedia.org/wiki/False_positives_and_false_negatives" rel="noreferrer">false positives</a>, but may miss some terms which one could consider to be similar to <code>medicine</code>. Furthermore it would complicate the case much more, but if your results are unsatisfactory you should consider two options above (and only if those are, don't jump into this approach without previous thought).</p> <p>Now, we have a rough measure of concept's similarity. But <strong>what does it mean</strong> that a certain concept has 0.1 positive similarity to medicine? Is it a concept one should classify as medical? Or maybe that's too far away already?</p> <h2>Asking expert</h2> <p>To get a threshold (below it terms will be considered non medical), it's easiest to ask a human to classify some of the concepts for us (and that's what active learning is about). 
Yeah, I know it's a really simple form of active learning, but I would consider it such anyway.</p> <p>I have written a class with <code>sklearn-like</code> interface asking human to classify concepts until optimal threshold (or maximum number of iterations) is reached.</p> <pre><code>class ActiveLearner: def __init__( self, concepts, concepts_similarity, max_steps: int, samples: int, step: float = 0.05, change_multiplier: float = 0.7, ): sorting_indices = np.argsort(-concepts_similarity) self.concepts = concepts[sorting_indices] self.concepts_similarity = concepts_similarity[sorting_indices] self.max_steps: int = max_steps self.samples: int = samples self.step: float = step self.change_multiplier: float = change_multiplier # We don't have to ask experts for the same concepts self._checked_concepts: typing.Set[int] = set() # Minimum similarity between vectors is -1 self._min_threshold: float = -1 # Maximum similarity between vectors is 1 self._max_threshold: float = 1 # Let's start from the highest similarity to ensure minimum amount of steps self.threshold_: float = 1 </code></pre> <ul> <li><code>samples</code> argument describes how many examples will be shown to an expert during each iteration (it is the maximum, it will return less if samples were already asked for or there is not enough of them to show).</li> <li><code>step</code> represents the drop of threshold (we start at 1 meaning perfect similarity) in each iteration.</li> <li><code>change_multiplier</code> - if an expert answers concepts are not related (or mostly unrelated, as multiple of them are returned), step is multiplied by this floating point number. It is used to pinpoint exact threshold between <code>step</code> changes at each iteration.</li> <li>concepts are sorted based on their similarity (the more similar a concept is, the higher)</li> </ul> <p>Function below asks expert for an opinion and find optimal threshold based on his answers.</p> <pre><code>def _ask_expert(self, available_concepts_indices): # Get random concepts (the ones above the threshold) concepts_to_show = set( np.random.choice( available_concepts_indices, len(available_concepts_indices) ).tolist() ) # Remove those already presented to an expert concepts_to_show = concepts_to_show - self._checked_concepts self._checked_concepts.update(concepts_to_show) # Print message for an expert and concepts to be classified if concepts_to_show: print("\nAre those concepts related to medicine?\n") print( "\n".join( f"{i}. {concept}" for i, concept in enumerate( self.concepts[list(concepts_to_show)[: self.samples]] ) ), "\n", ) return input("[y]es / [n]o / [any]quit ") return "y" </code></pre> <p>Example question looks like this:</p> <pre><code>Are those concepts related to medicine? 0. anesthetic drug 1. child and adolescent psychiatry 2. tertiary care center 3. sex therapy 4. drug design 5. pain disorder 6. psychiatric rehabilitation 7. combined oral contraceptive 8. family practitioner committee 9. cancer family syndrome 10. social psychology 11. drug sale 12. blood system [y]es / [n]o / [any]quit y </code></pre> <p>... 
parsing an answer from expert:</p> <pre><code># True - keep asking, False - stop the algorithm def _parse_expert_decision(self, decision) -&gt; bool: if decision.lower() == "y": # You can't go higher as current threshold is related to medicine self._max_threshold = self.threshold_ if self.threshold_ - self.step &lt; self._min_threshold: return False # Lower the threshold self.threshold_ -= self.step return True if decision.lower() == "n": # You can't got lower than this, as current threshold is not related to medicine already self._min_threshold = self.threshold_ # Multiply threshold to pinpoint exact spot self.step *= self.change_multiplier if self.threshold_ + self.step &lt; self._max_threshold: return False # Lower the threshold self.threshold_ += self.step return True return False </code></pre> <p>And finally whole code code of <code>ActiveLearner</code>, which finds optimal threshold of similiarity accordingly to expert:</p> <pre><code>class ActiveLearner: def __init__( self, concepts, concepts_similarity, samples: int, max_steps: int, step: float = 0.05, change_multiplier: float = 0.7, ): sorting_indices = np.argsort(-concepts_similarity) self.concepts = concepts[sorting_indices] self.concepts_similarity = concepts_similarity[sorting_indices] self.samples: int = samples self.max_steps: int = max_steps self.step: float = step self.change_multiplier: float = change_multiplier # We don't have to ask experts for the same concepts self._checked_concepts: typing.Set[int] = set() # Minimum similarity between vectors is -1 self._min_threshold: float = -1 # Maximum similarity between vectors is 1 self._max_threshold: float = 1 # Let's start from the highest similarity to ensure minimum amount of steps self.threshold_: float = 1 def _ask_expert(self, available_concepts_indices): # Get random concepts (the ones above the threshold) concepts_to_show = set( np.random.choice( available_concepts_indices, len(available_concepts_indices) ).tolist() ) # Remove those already presented to an expert concepts_to_show = concepts_to_show - self._checked_concepts self._checked_concepts.update(concepts_to_show) # Print message for an expert and concepts to be classified if concepts_to_show: print("\nAre those concepts related to medicine?\n") print( "\n".join( f"{i}. 
{concept}" for i, concept in enumerate( self.concepts[list(concepts_to_show)[: self.samples]] ) ), "\n", ) return input("[y]es / [n]o / [any]quit ") return "y" # True - keep asking, False - stop the algorithm def _parse_expert_decision(self, decision) -&gt; bool: if decision.lower() == "y": # You can't go higher as current threshold is related to medicine self._max_threshold = self.threshold_ if self.threshold_ - self.step &lt; self._min_threshold: return False # Lower the threshold self.threshold_ -= self.step return True if decision.lower() == "n": # You can't got lower than this, as current threshold is not related to medicine already self._min_threshold = self.threshold_ # Multiply threshold to pinpoint exact spot self.step *= self.change_multiplier if self.threshold_ + self.step &lt; self._max_threshold: return False # Lower the threshold self.threshold_ += self.step return True return False def fit(self): for _ in range(self.max_steps): available_concepts_indices = np.nonzero( self.concepts_similarity &gt;= self.threshold_ )[0] if available_concepts_indices.size != 0: decision = self._ask_expert(available_concepts_indices) if not self._parse_expert_decision(decision): break else: self.threshold_ -= self.step return self </code></pre> <p>All in all, you would have to answer some questions manually but this approach is <strong>way more</strong> accurate in my opinion. </p> <p>Furthermore, you don't have to go through all of the samples, just a small subset of it. You may decide how many samples constitute a medical term (whether 40 medical samples and 10 non-medical samples shown, should still be considered medical?), which let's you fine-tune this approach to your preferences. If there is an outlier (say, 1 sample out of 50 is non-medical), I would consider the threshold to still be valid.</p> <p><strong>Once again:</strong> This approach should be mixed with others in order to minimalize the chance for wrong classification.</p> <h2>Classifier</h2> <p>When we obtain the threshold from expert, classification would be instantenous, here is a simple class for classification:</p> <pre><code>class Classifier: def __init__(self, centroid, threshold: float): self.centroid = centroid self.threshold: float = threshold def predict(self, concepts_pipe): predictions = [] for concept in concepts_pipe: predictions.append(self.centroid.similarity(concept) &gt; self.threshold) return predictions </code></pre> <p>And for brevity, here is the final source code:</p> <pre><code>import json import typing import numpy as np import spacy class Similarity: def __init__(self, centroid, nlp, n_threads: int, batch_size: int): # In our case it will be medicine self.centroid = centroid # spaCy's Language model (english), which will be used to return similarity to # centroid of each concept self.nlp = nlp self.n_threads: int = n_threads self.batch_size: int = batch_size self.missing: typing.List[int] = [] def __call__(self, concepts): concepts_similarity = [] # nlp.pipe is faster for many documents and can work in parallel (not blocked by GIL) for i, concept in enumerate( self.nlp.pipe( concepts, n_threads=self.n_threads, batch_size=self.batch_size ) ): if concept.has_vector: concepts_similarity.append(self.centroid.similarity(concept)) else: # If document has no vector, it's assumed to be totally dissimilar to centroid concepts_similarity.append(-1) self.missing.append(i) return np.array(concepts_similarity) class ActiveLearner: def __init__( self, concepts, concepts_similarity, samples: int, max_steps: int, 
step: float = 0.05, change_multiplier: float = 0.7, ): sorting_indices = np.argsort(-concepts_similarity) self.concepts = concepts[sorting_indices] self.concepts_similarity = concepts_similarity[sorting_indices] self.samples: int = samples self.max_steps: int = max_steps self.step: float = step self.change_multiplier: float = change_multiplier # We don't have to ask experts for the same concepts self._checked_concepts: typing.Set[int] = set() # Minimum similarity between vectors is -1 self._min_threshold: float = -1 # Maximum similarity between vectors is 1 self._max_threshold: float = 1 # Let's start from the highest similarity to ensure minimum amount of steps self.threshold_: float = 1 def _ask_expert(self, available_concepts_indices): # Get random concepts (the ones above the threshold) concepts_to_show = set( np.random.choice( available_concepts_indices, len(available_concepts_indices) ).tolist() ) # Remove those already presented to an expert concepts_to_show = concepts_to_show - self._checked_concepts self._checked_concepts.update(concepts_to_show) # Print message for an expert and concepts to be classified if concepts_to_show: print("\nAre those concepts related to medicine?\n") print( "\n".join( f"{i}. {concept}" for i, concept in enumerate( self.concepts[list(concepts_to_show)[: self.samples]] ) ), "\n", ) return input("[y]es / [n]o / [any]quit ") return "y" # True - keep asking, False - stop the algorithm def _parse_expert_decision(self, decision) -&gt; bool: if decision.lower() == "y": # You can't go higher as current threshold is related to medicine self._max_threshold = self.threshold_ if self.threshold_ - self.step &lt; self._min_threshold: return False # Lower the threshold self.threshold_ -= self.step return True if decision.lower() == "n": # You can't got lower than this, as current threshold is not related to medicine already self._min_threshold = self.threshold_ # Multiply threshold to pinpoint exact spot self.step *= self.change_multiplier if self.threshold_ + self.step &lt; self._max_threshold: return False # Lower the threshold self.threshold_ += self.step return True return False def fit(self): for _ in range(self.max_steps): available_concepts_indices = np.nonzero( self.concepts_similarity &gt;= self.threshold_ )[0] if available_concepts_indices.size != 0: decision = self._ask_expert(available_concepts_indices) if not self._parse_expert_decision(decision): break else: self.threshold_ -= self.step return self class Classifier: def __init__(self, centroid, threshold: float): self.centroid = centroid self.threshold: float = threshold def predict(self, concepts_pipe): predictions = [] for concept in concepts_pipe: predictions.append(self.centroid.similarity(concept) &gt; self.threshold) return predictions if __name__ == "__main__": nlp = spacy.load("en_vectors_web_lg") centroid = nlp("medicine") concepts = json.load(open("concepts_new.txt")) concepts_similarity = Similarity(centroid, nlp, n_threads=-1, batch_size=4096)( concepts ) learner = ActiveLearner( np.array(concepts), concepts_similarity, samples=20, max_steps=50 ).fit() print(f"Found threshold {learner.threshold_}\n") classifier = Classifier(centroid, learner.threshold_) pipe = nlp.pipe(concepts, n_threads=-1, batch_size=4096) predictions = classifier.predict(pipe) print( "\n".join( f"{concept}: {label}" for concept, label in zip(concepts[20:40], predictions[20:40]) ) ) </code></pre> <p>After answering some questions, with threshold 0.1 (everything between <code>[-1, 0.1)</code> is considered non-medical, while 
<code>[0.1, 1]</code> is considered medical) I got the following results:</p> <pre><code>kartagener s syndrome: True summer season: True taq: False atypical neuroleptic: True anterior cingulate: False acute respiratory distress syndrome: True circularity: False mutase: False adrenergic blocking drug: True systematic desensitization: True the turning point: True 9l: False pyridazine: False bisoprolol: False trq: False propylhexedrine: False type 18: True darpp 32: False rickettsia conorii: False sport shoe: True </code></pre> <p>As you can see this approach is far from perfect, so the last section described possible improvements:</p> <h1>Possible improvements</h1> <p>As mentioned in the beginning using my approach mixed with other answers would probably leave out ideas like <code>sport shoe</code> belonging to <code>medicine</code> out and active learning approach would be more of a decisive vote in case of a draw between two heuristics mentioned above.</p> <p>We could create an active learning ensemble as well. Instead of one threshold, say 0.1, we would use multiple of them (either increasing or decreasing), let's say those are <code>0.1, 0.2, 0.3, 0.4, 0.5</code>.</p> <p>Let's say <code>sport shoe</code> gets, for each threshold it's respective <code>True/False</code> like this:</p> <p><code>True True False False False</code>,</p> <p>Making a majority voting we would mark it <code>non-medical</code> by 3 out of 2 votes. Furthermore, too strict threshold would me mitigated as well if thresholds below it out-vote it (case if <code>True/False</code> would look like this: <code>True True True False False</code>).</p> <p><strong>Final possible improvement I came up with</strong>: In the code above I'm using <code>Doc</code> vector, which is a mean of word vectors creating the concept. Say one word is missing (vectors consisting of zeros), in such case, it would be pushed further away from <code>medicine</code> centroid. You may not want that (as some niche medical terms [abbreviations like <code>gpv</code> or others] might be missing their representation), in such case you could average only those vectors which are different from zero. </p> <p>I know this post is quite lengthy, so if you have any questions post them below.</p>
2019-02-19 19:49:06.923000+00:00
2019-02-20 11:58:18.507000+00:00
2019-02-20 11:58:18.507000+00:00
null
54,625,493
<p>For each concept of my dataset I have stored the corresponding wikipedia categories. For example, consider the following 5 concepts and their corresponding wikipedia categories.</p> <ul> <li>hypertriglyceridemia: <code>['Category:Lipid metabolism disorders', 'Category:Medical conditions related to obesity']</code></li> <li>enzyme inhibitor: <code>['Category:Enzyme inhibitors', 'Category:Medicinal chemistry', 'Category:Metabolism']</code></li> <li>bypass surgery: <code>['Category:Surgery stubs', 'Category:Surgical procedures and techniques']</code></li> <li>perth: <code>['Category:1829 establishments in Australia', 'Category:Australian capital cities', 'Category:Metropolitan areas of Australia', 'Category:Perth, Western Australia', 'Category:Populated places established in 1829']</code></li> <li>climate: <code>['Category:Climate', 'Category:Climatology', 'Category:Meteorological concepts']</code></li> </ul> <p>As you can see, the first three concepts belong to medical domain (whereas the remaining two terms are not medical terms).</p> <p>More precisely, I want to divide my concepts as medical and non-medical. However, it is very difficult to divide the concepts using the categories alone. For example, even though the two concepts <code>enzyme inhibitor</code> and <code>bypass surgery</code> are in medical domain, their categories are very different to each other.</p> <p>Therefore, I would like to know if there is a way to obtain the <code>parent category</code> of the categories (for example, the categories of <code>enzyme inhibitor</code> and <code>bypass surgery</code> belong to <code>medical</code> parent category)</p> <p>I am currently using <code>pymediawiki</code> and <code>pywikibot</code>. However, I am not restricted to only those two libraries and happy to have solutions using other libraries as well.</p> <p><strong>EDIT</strong></p> <p>As suggested by @IlmariKaronen I am also using the <code>categories of categories</code> and the results I got is as follows (T<em>he small font near the <code>category</code> is the <code>categories of the category</code></em>). <a href="https://i.stack.imgur.com/oSPla.png" rel="noreferrer"><img src="https://i.stack.imgur.com/oSPla.png" alt="enter image description here"></a></p> <p>However, I still could not find a way to use these category details to decide if a given term is a medical or non-medical.</p> <p>Moreover, as pointed by @IlmariKaronen using <code>Wikiproject</code> details can be potential. However, it seems like the <code>Medicine</code> wikiproject do not seem to have all the medical terms. Therefore we also need to check other wikiprojects as well.</p> <p><strong>EDIT:</strong> My current code of extracting categories from wikipedia concepts is as follows. 
This could be done using <code>pywikibot</code> or <code>pymediawiki</code> as follows.</p> <ol> <li><p>Using the librarary <code>pymediawiki</code></p> <p>import mediawiki as pw</p> <pre><code>p = wikipedia.page('enzyme inhibitor') print(p.categories) </code></pre></li> <li><p>Using the library <code>pywikibot</code></p> <pre><code>import pywikibot as pw site = pw.Site('en', 'wikipedia') print([ cat.title() for cat in pw.Page(site, 'support-vector machine').categories() if 'hidden' not in cat.categoryinfo ]) </code></pre></li> </ol> <p>The categories of categories can also be done in the same way as shown in the answer by @IlmariKaronen.</p> <p>If you are looking for longer list of concepts for testing I have mentioned more examples below.</p> <pre><code>['juvenile chronic arthritis', 'climate', 'alexidine', 'mouthrinse', 'sialosis', 'australia', 'artificial neural network', 'ricinoleic acid', 'bromosulfophthalein', 'myelosclerosis', 'hydrochloride salt', 'cycasin', 'aldosterone antagonist', 'fungal growth', 'describe', 'liver resection', 'coffee table', 'natural language processing', 'infratemporal fossa', 'social withdrawal', 'information retrieval', 'monday', 'menthol', 'overturn', 'prevailing', 'spline function', 'acinic cell carcinoma', 'furth', 'hepatic protein', 'blistering', 'prefixation', 'january', 'cardiopulmonary receptor', 'extracorporeal membrane oxygenation', 'clinodactyly', 'melancholic', 'chlorpromazine hydrochloride', 'level of evidence', 'washington state', 'cat', 'newyork', 'year elevan', 'trituration', 'gold alloy', 'hexoprenaline', 'second molar', 'novice', 'oxygen radical', 'subscription', 'ordinate', 'approximal', 'spongiosis', 'ribothymidine', 'body of evidence', 'vpb', 'porins', 'musculocutaneous'] </code></pre> <p>For a very long list please check the link below. <a href="https://docs.google.com/document/d/1BYllMyDlw-Rb4uMh89VjLml2Bl9Y7oUlopM-Z4F6pN0/edit?usp=sharing" rel="noreferrer">https://docs.google.com/document/d/1BYllMyDlw-Rb4uMh89VjLml2Bl9Y7oUlopM-Z4F6pN0/edit?usp=sharing</a></p> <p><strong>NOTE: I am not expecting the solution to work 100% (if the proposed algorithm is able to detect many of the medical concepts that is enough for me)</strong></p> <p>I am happy to provide more details if needed.</p>
2019-02-11 07:10:14.060000+00:00
2019-02-20 11:58:18.507000+00:00
2019-02-17 13:28:30.303000+00:00
python|mediawiki|wikipedia|wikipedia-api|mediawiki-api
['https://stackoverflow.com/a/54757134/10886420', 'https://stackoverflow.com/users/10317656/tavoglc', 'https://stackoverflow.com/users/7879193/stanislav-kralin', 'https://stackoverflow.com/a/54781366/10886420', 'https://stackoverflow.com/users/10554298/meena-nagarajan', 'https://stackoverflow.com/users/10953776/anand-v-singh', 'https://stackoverflow.com/a/54721431/10886420', 'https://spacy.io/', 'https://en.wikipedia.org/wiki/Active_learning_(machine_learning)', 'https://stackoverflow.com/users/10953776/anand-v-singh', 'https://arxiv.org/pdf/1810.04805.pdf', 'https://spacy.io/usage/models', 'https://spacy.io/models/en#en_vectors_web_lg', 'https://en.wikipedia.org/wiki/False_positives_and_false_negatives']
14
49,861,122
<p>Sorry I'm not an expert. Since there hasn't been a response on this and if you are still looking, the vocabulary I would use to describe this type of problem is <a href="https://arxiv.org/pdf/1703.04309.pdf" rel="nofollow noreferrer">disparity networks</a> and segmentation. Your best bet may be a specific type of disparity network: <a href="https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/" rel="nofollow noreferrer">U-net</a> </p>
2018-04-16 15:34:42.850000+00:00
2018-04-16 15:34:42.850000+00:00
null
null
47,871,915
<p>I'm about to start developing a neural net here with Tensorflow, but before I get into it too deep, I was hoping I could get some feedback on exactly what type of neural net I will need for this (If a net is the right way to go about this at all) </p> <p>I need the NN to input an image, and output another image. This will be used for path-mapping on a robot I'm working on. The input image will be a <a href="https://stackoverflow.com/questions/17607312/difference-between-disparity-map-and-disparity-image-in-stereo-matching">disparity map</a>, and the output will be a "driveable map" (an image that displays what in the scene can be driven on, and what can't)</p> <p>I have built a dataset using Unity 3d. Here is an example from the set:</p> <p>disparity map</p> <p><a href="https://i.stack.imgur.com/s6NK6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s6NK6.png" alt="Disparity Map"></a></p> <p>driveable map:</p> <p><a href="https://i.stack.imgur.com/UsYlR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UsYlR.png" alt="Driveable map"></a></p> <p>As you can probably see, white represents the area where my robot can drive and black is where it can't. I will need the NN to take a disparity map, and give me back a "driveable map". Can this be done? Thanks!</p>
2017-12-18 15:50:33.760000+00:00
2018-04-16 15:34:42.850000+00:00
null
tensorflow|computer-vision
['https://arxiv.org/pdf/1703.04309.pdf', 'https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/']
2
33,906,936
<p><a href="http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf" rel="nofollow">TnT tagger's paper</a> presents an efficient approach for tagging unknown words. </p> <p>Another approach using a lexicon to handle unknown words can be found in <a href="http://arxiv.org/abs/1412.4021" rel="nofollow">this article</a>. The article shows that the lexicon-based approach obtains promising tagging results of unknown words in comparison to TnT's on 13 languages, including Bulgarian, Czech, Dutch, English, French, German, Hindi, Italian, Portuguese, Spanish, Swedish, Thai and Vietnamese. You can also find in the article accuracy results (for known words and unknown words) of TnT and other two POS and morphological taggers on the 13 languages. </p>
2015-11-25 01:21:22.457000+00:00
2015-11-25 02:30:34.107000+00:00
2015-11-25 02:30:34.107000+00:00
null
16,643,018
<p>what are the different between part of speech tagging for unknown words and part of speech tagging for known words. Is there any tool that can predict part of speech tagging for the words ..</p>
2013-05-20 05:15:14.753000+00:00
2015-11-25 02:30:34.107000+00:00
2013-05-29 06:16:23.427000+00:00
nlp|stanford-nlp|oov
['http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf', 'http://arxiv.org/abs/1412.4021']
2
42,909,035
<p>From the DCGAN paper [Radford et al. <a href="https://arxiv.org/pdf/1511.06434.pdf]" rel="noreferrer">https://arxiv.org/pdf/1511.06434.pdf]</a>...</p> <p>"The ReLU activation (Nair &amp; Hinton, 2010) is used in the generator with the exception of the output layer which uses the Tanh function. We observed that using a bounded activation allowed the model to learn more quickly to saturate and cover the color space of the training distribution. Within the discriminator we found the leaky rectified activation (Maas et al., 2013) (Xu et al., 2015) to work well, especially for higher resolution modeling. This is in contrast to the original GAN paper, which used the maxout activation (Goodfellow et al., 2013)."</p> <p>It could be that the symmetry of tanh is an advantage here, since the network should be treating darker colours and lighter colours in a symmetric way. </p>
2017-03-20 16:32:52.840000+00:00
2017-03-20 16:56:56.860000+00:00
2017-03-20 16:56:56.860000+00:00
null
41,489,907
<p>I was wondering, why in most of the models of GAN (in MNIST at least) I saw, the activation function (for the discriminator and the generator) was tanh ? Isn't ReLu more efficient ? (I always read that for predictive networks)</p> <p>Thanks!</p>
2017-01-05 16:25:37.940000+00:00
2017-03-20 21:05:33.540000+00:00
2017-03-20 21:05:33.540000+00:00
tensorflow|neural-network|deep-learning
['https://arxiv.org/pdf/1511.06434.pdf]']
1
46,359,504
<p>First of all you have to get music_tagger_cnn.py and put it in the project path. After that you can build your model:</p> <pre><code>from music_tagger_cnn import * input_tensor = Input(shape=(1, 18, 119)) model =MusicTaggerCNN(input_tensor=input_tensor, include_top=False, weights='msd') </code></pre> <p>You can change the input tensor by the dimension you want... I usually use Theano dim ordering but Tensorflow as backend, so that's why:</p> <pre><code>from keras import backend as K K.set_image_dim_ordering('th') </code></pre> <p>Using Theano dim ordering you hav to take into account that the order of the sample's dimensions have to be changed</p> <pre><code>X_train = X_train.transpose(0, 3, 2, 1) X_val = X_val.transpose(0, 3, 2, 1) </code></pre> <p>After that you have to freeze these layers that you don't want to be updated</p> <pre><code>for layer in model.layers: layer.trainable = False </code></pre> <p>Now you can set your own output, for example:</p> <pre><code>last_layer = model.get_layer('pool3').output out = Flatten()(last_layer) out = Dense(128, activation='relu', name='fc2')(out) out = Dropout(0.5)(out) out = Dense(n_classes, activation='softmax', name='fc3')(out) model = Model(input=model.input, output=out) </code></pre> <p>After that you have to be able to train it just doing:</p> <pre><code>sgd = SGD(lr=0.01, momentum=0, decay=0.002, nesterov=True) model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy']) history = model.fit(X_train, labels_train, validation_data=(X_val, labels_val), nb_epoch=100, batch_size=5) </code></pre> <p>Note that labels should be in one-hot encoding</p> <p>I hope it will help!!</p> <p>Update: Posting code so I can get help debugging these lines and prevent a crash. </p> <pre><code>input_tensor = Input(shape=(3, 640, 480)) model = MusicTaggerCNN(input_tensor=input_tensor, include_top=False, weights='msd') for layer in model.layers: layer.trainable = False last_layer = model.get_layer('pool3').output out = Flatten()(last_layer) out = Dense(128, activation='relu', name='fc2')(out) out = Dropout(0.5)(out) out = Dense(n_classes, activation='softmax', name='fc3')(out) model = Model(input=model.input, output=out) sgd = SGD(lr=0.01, momentum=0, decay=0.002, nesterov=True) model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy']) history = model.fit(X_train, labels_train, validation_data=(X_test, Y_test), nb_epoch=100, batch_size=5) </code></pre> <p>EDIT # 2 </p> <pre><code> # -*- coding: utf-8 -*- '''MusicTaggerCNN model for Keras. 
# Reference: - [Automatic tagging using deep convolutional neural networks](https://arxiv.org/abs/1606.00298) - [Music-auto_tagging-keras](https://github.com/keunwoochoi/music-auto_tagging-keras) ''' from __future__ import print_function from __future__ import absolute_import from keras import backend as K from keras.layers import Input, Dense from keras.models import Model from keras.layers import Dense, Dropout, Flatten from keras.layers.convolutional import Convolution2D from keras.layers.convolutional import MaxPooling2D, ZeroPadding2D from keras.layers.normalization import BatchNormalization from keras.layers.advanced_activations import ELU from keras.utils.data_utils import get_file from keras.layers import Input, Dense TH_WEIGHTS_PATH = 'https://github.com/keunwoochoi/music-auto_tagging-keras/blob/master/data/music_tagger_cnn_weights_theano.h5' TF_WEIGHTS_PATH = 'https://github.com/keunwoochoi/music-auto_tagging-keras/blob/master/data/music_tagger_cnn_weights_tensorflow.h5' def MusicTaggerCNN(weights='msd', input_tensor=None, include_top=True): '''Instantiate the MusicTaggerCNN architecture, optionally loading weights pre-trained on Million Song Dataset. Note that when using TensorFlow, for best performance you should set `image_dim_ordering="tf"` in your Keras config at ~/.keras/keras.json. The model and the weights are compatible with both TensorFlow and Theano. The dimension ordering convention used by the model is the one specified in your Keras config file. For preparing mel-spectrogram input, see `audio_conv_utils.py` in [applications](https://github.com/fchollet/keras/tree/master/keras/applications). You will need to install [Librosa](http://librosa.github.io/librosa/) to use it. # Arguments weights: one of `None` (random initialization) or "msd" (pre-training on ImageNet). input_tensor: optional Keras tensor (i.e. output of `layers.Input()`) to use as image input for the model. include_top: whether to include the 1 fully-connected layer (output layer) at the top of the network. If False, the network outputs 256-dim features. # Returns A Keras model instance. 
''' if weights not in {'msd', None}: raise ValueError('The `weights` argument should be either ' '`None` (random initialization) or `msd` ' '(pre-training on Million Song Dataset).') # Determine proper input shape if K.image_dim_ordering() == 'th': input_shape = (3, 640, 480) else: input_shape = (3, 640, 480) if input_tensor is None: melgram_input = Input(shape=input_shape) else: if not K.is_keras_tensor(input_tensor): melgram_input = Input(tensor=input_tensor, shape=input_shape) else: melgram_input = input_tensor # Determine input axis if K.image_dim_ordering() == 'th': channel_axis = 1 freq_axis = 2 time_axis = 3 else: channel_axis = 3 freq_axis = 1 time_axis = 2 # Input block x = BatchNormalization(axis=freq_axis, name='bn_0_freq')(melgram_input) # Conv block 1 x = Convolution2D(64, 3, 3, border_mode='same', name='conv1')(x) x = BatchNormalization(axis=channel_axis, mode=0, name='bn1')(x) x = ELU()(x) x = MaxPooling2D(pool_size=(2, 4), name='pool1')(x) # Conv block 2 x = Convolution2D(128, 3, 3, border_mode='same', name='conv2')(x) x = BatchNormalization(axis=channel_axis, mode=0, name='bn2')(x) x = ELU()(x) x = MaxPooling2D(pool_size=(2, 4), name='pool2')(x) # Conv block 3 x = Convolution2D(128, 3, 3, border_mode='same', name='conv3')(x) x = BatchNormalization(axis=channel_axis, mode=0, name='bn3')(x) x = ELU()(x) x = MaxPooling2D(pool_size=(2, 4), name='pool3')(x) # Output x = Flatten()(x) if include_top: x = Dense(50, activation='sigmoid', name='output')(x) # Create model model = Model(melgram_input, x) if weights is None: return model else: # Load input if K.image_dim_ordering() == 'tf': raise RuntimeError("Please set image_dim_ordering == 'th'." "You can set it at ~/.keras/keras.json") model.load_weights('data/music_tagger_cnn_weights_%s.h5' % K._BACKEND, by_name=True) return model </code></pre> <p>EDIT #3</p> <p>I tried the keras example for using the MusicTaggerCRNN as a feature extractor of the melgrams. Then i trained a simple NN with 2 Dense layers and a binary output. 
The samples taken in my example don't apply in your case but it's also a binary classifier I used <code>keras==1.2.2</code> and <code>tensorflow-gpu==1.0.0</code> and works for me.</p> <p>Here's the code:</p> <pre><code>from keras.applications.music_tagger_crnn import MusicTaggerCRNN from keras.applications.music_tagger_crnn import preprocess_input, decode_predictions import numpy as np from keras.layers import Input, Dense from keras.models import Model from keras.layers import Dense, Dropout, Flatten from keras.optimizers import SGD model = MusicTaggerCRNN(weights='msd', include_top=False) #Samples simulation audio_paths_train = ['data/genres/blues/blues.00000.au','data/genres/classical/classical.00000.au','data/genres/classical/classical.00002.au', 'data/genres/blues/blues.00003.au'] audio_paths_test = ['data/genres/blues/blues.00001.au', 'data/genres/classical/classical.00001.au', 'data/genres/blues/blues.00002.au', 'data/genres/classical/classical.00003.au'] labels_train = [0,1,1,0] labels_test = [0, 1, 0, 1] melgrams_train = [preprocess_input(audio_path) for audio_path in audio_paths_train] melgrams_test = [preprocess_input(audio_path) for audio_path in audio_paths_test] feats_train = [model.predict(np.expand_dims(melgram, axis=0)) for melgram in melgrams_train] feats_test = [model.predict(np.expand_dims(melgram, axis=0)) for melgram in melgrams_test] feats_train = np.array(feats_train) feats_test = np.array(feats_test) _input = Input(shape=(1,32)) x = Flatten(name='flatten')(_input) x = Dense(128, activation='relu', name='fc6')(x) x = Dense(64, activation='relu', name='fc7')(x) x = Dense(1, activation='softmax', name='fc8')(x) class_model = Model(_input, x) sgd = SGD(lr=0.01, momentum=0, decay=0.02, nesterov=True) class_model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy']) history = class_model.fit(feats_train, labels_train, validation_data=(feats_test, labels_test), nb_epoch=100, batch_size=5, class_weight='auto') print(history.history['acc']) # Final evaluation of the model scores = class_model.evaluate(feats_test, labels_test, verbose=0) print("Accuracy: %.2f%%" % (scores[1] * 100)) </code></pre>
2017-09-22 07:48:10.330000+00:00
2017-09-27 14:24:10.300000+00:00
2017-09-27 14:24:10.300000+00:00
null
46,315,258
<p>I am trying to increase my validation accuracy of my CNN from 76% (currently) to over 90%. I am going to show all of the information about my CNN's performance and configuration below.</p> <p>In essence, I want my CNN to distinguish between two classes of mel-spectrograms:</p> <p><strong>Class # 1</strong> <a href="https://i.stack.imgur.com/HmcVx.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HmcVx.jpg" alt="class # 1"></a> <strong>Class # 2</strong> <a href="https://i.stack.imgur.com/0v21W.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0v21W.jpg" alt="enter image description here"></a> <strong>Here is the graph of accuracy vs epoch:</strong></p> <p><a href="https://i.stack.imgur.com/qVnV6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qVnV6.png" alt="enter image description here"></a></p> <p><strong>Here is the graph of loss vs. epoch</strong></p> <p><a href="https://i.stack.imgur.com/crzIC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/crzIC.png" alt="enter image description here"></a></p> <p>Finally, here is the model architecture configuration</p> <pre><code>model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3),activation='relu',input_shape=(3, 640, 480))) model.add(Conv2D(64, (3, 3), activation='relu', dim_ordering="th")) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(2, activation='softmax')) </code></pre> <p>Here are my calls to model.compile() and model.fit()</p> <pre><code>model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.SGD(lr=0.001), metrics=['accuracy']) print("Compiled model") history = model.fit(X_train, Y_train, batch_size=8, epochs=50, verbose=1, validation_data=(X_test, Y_test)) </code></pre> <p><em>How can I change my CNN configuration to increase the validation accuracy score?</em></p> <p>Things I have tried:</p> <ol> <li>Decrease the learning rate to prevent sporadic fluctuations in the accuracy.</li> <li>Decrease the batch_size from 64 down to 8.</li> <li>Increase the number of epochs to 50(However not sure if this is enough).</li> </ol> <p>Any help would be greatly appreciated!</p> <p><strong>UPDATE #1</strong> I increase the number of epochs to 200, and after letting the program run overnight I got a validated accuracy score of around 76.31%</p> <p>I am posting a picture of accuracy vs. epoch and loss vs. epoch below</p> <p><a href="https://i.stack.imgur.com/3HftY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3HftY.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/12dK6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/12dK6.png" alt="enter image description here"></a></p> <p>What else specifically about my model architecture can I change to get better accuracy?</p>
2017-09-20 06:59:43.167000+00:00
2022-03-12 11:10:35.240000+00:00
2017-09-20 22:28:43.540000+00:00
python-3.x|tensorflow|deep-learning|keras|spectrogram
[]
0
59,915,329
<p>Without loss of generality, the problem of generating random integers on [a, b] can be reduced to the problem of generating random integers on [0, s). The state of the art for generating random integers on a bounded range from a uniform PRNG is represented by the following recent publication:</p> <p>Daniel Lemire,"Fast Random Integer Generation in an Interval." <em>ACM Trans. Model. Comput. Simul.</em> 29, 1, Article 3 (January 2019) (<a href="https://arxiv.org/abs/1805.10941" rel="nofollow noreferrer">ArXiv draft</a>)</p> <p>Lemire shows that his algorithm provides unbiased results, and motivated by the growing popularity of very fast high-quality PRNGs such as Melissa O'Neill's <a href="http://www.pcg-random.org/" rel="nofollow noreferrer">PCG generators</a>, shows how to the results can be computed fast, avoiding slow division operations almost all of the time. </p> <p>An exemplary ISO-C implementation of his algorithm is shown in <code>randint()</code> below. Here I demonstrate it in conjunction with George Marsaglia's older <a href="https://groups.google.com/forum/#!original/comp.lang.c/qFv18ql_WlU/IK8KGZZFJx4J" rel="nofollow noreferrer">KISS64</a> PRNG. For performance reasons, the required 64×64→128 bit unsigned multiplication is typically best implemented via machine-specific intrinsics or inline assembly that map directly to appropriate hardware instructions.</p> <pre class="lang-c prettyprint-override"><code>#include &lt;stdio.h&gt; #include &lt;stdlib.h&gt; #include &lt;stdint.h&gt; /* PRNG state */ typedef struct Prng_T *Prng_T; /* Returns uniformly distributed integers in [0, 2**64-1] */ uint64_t random64 (Prng_T); /* Multiplies two 64-bit factors into a 128-bit product */ void umul64wide (uint64_t, uint64_t, uint64_t *, uint64_t *); /* Generate in bias-free manner a random integer in [0, s) with Lemire's fast algorithm that uses integer division only rarely. s must be in [0, 2**64-1]. Daniel Lemire, "Fast Random Integer Generation in an Interval," ACM Trans. Model. Comput. Simul. 29, 1, Article 3 (January 2019) */ uint64_t randint (Prng_T prng, uint64_t s) { uint64_t x, h, l, t; x = random64 (prng); umul64wide (x, s, &amp;h, &amp;l); if (l &lt; s) { t = (0 - s) % s; while (l &lt; t) { x = random64 (prng); umul64wide (x, s, &amp;h, &amp;l); } } return h; } #define X86_INLINE_ASM (0) /* Multiply two 64-bit unsigned integers into a 128 bit unsined product. Return the least significant 64 bist of the product to the location pointed to by lo, and the most signfiicant 64 bits of the product to the location pointed to by hi. 
*/ void umul64wide (uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo) { #if X86_INLINE_ASM uint64_t l, h; __asm__ ( "movq %2, %%rax;\n\t" // rax = a "mulq %3;\n\t" // rdx:rax = a * b "movq %%rax, %0;\n\t" // l = (a * b)&lt;31:0&gt; "movq %%rdx, %1;\n\t" // h = (a * b)&lt;63:32&gt; : "=r"(l), "=r"(h) : "r"(a), "r"(b) : "%rax", "%rdx"); *lo = l; *hi = h; #else // X86_INLINE_ASM uint64_t a_lo = (uint64_t)(uint32_t)a; uint64_t a_hi = a &gt;&gt; 32; uint64_t b_lo = (uint64_t)(uint32_t)b; uint64_t b_hi = b &gt;&gt; 32; uint64_t p0 = a_lo * b_lo; uint64_t p1 = a_lo * b_hi; uint64_t p2 = a_hi * b_lo; uint64_t p3 = a_hi * b_hi; uint32_t cy = (uint32_t)(((p0 &gt;&gt; 32) + (uint32_t)p1 + (uint32_t)p2) &gt;&gt; 32); *lo = p0 + (p1 &lt;&lt; 32) + (p2 &lt;&lt; 32); *hi = p3 + (p1 &gt;&gt; 32) + (p2 &gt;&gt; 32) + cy; #endif // X86_INLINE_ASM } /* George Marsaglia's KISS64 generator, posted to comp.lang.c on 28 Feb 2009 https://groups.google.com/forum/#!original/comp.lang.c/qFv18ql_WlU/IK8KGZZFJx4J */ struct Prng_T { uint64_t x, c, y, z, t; }; struct Prng_T kiss64 = {1234567890987654321ULL, 123456123456123456ULL, 362436362436362436ULL, 1066149217761810ULL, 0ULL}; /* KISS64 state equations */ #define MWC64 (kiss64-&gt;t = (kiss64-&gt;x &lt;&lt; 58) + kiss64-&gt;c, \ kiss64-&gt;c = (kiss64-&gt;x &gt;&gt; 6), kiss64-&gt;x += kiss64-&gt;t, \ kiss64-&gt;c += (kiss64-&gt;x &lt; kiss64-&gt;t), kiss64-&gt;x) #define XSH64 (kiss64-&gt;y ^= (kiss64-&gt;y &lt;&lt; 13), kiss64-&gt;y ^= (kiss64-&gt;y &gt;&gt; 17), \ kiss64-&gt;y ^= (kiss64-&gt;y &lt;&lt; 43)) #define CNG64 (kiss64-&gt;z = 6906969069ULL * kiss64-&gt;z + 1234567ULL) #define KISS64 (MWC64 + XSH64 + CNG64) uint64_t random64 (Prng_T kiss64) { return KISS64; } int main (void) { int i; Prng_T state = &amp;kiss64; for (i = 0; i &lt; 1000; i++) { printf ("%llu\n", randint (state, 10)); } return EXIT_SUCCESS; } </code></pre>
2020-01-26 03:31:15.777000+00:00
2020-01-26 03:51:41.987000+00:00
2020-01-26 03:51:41.987000+00:00
null
11,758,809
<p>In this StackOverflow question:</p> <p><a href="https://stackoverflow.com/questions/5008804/generating-random-integer-from-a-range">Generating random integer from a range</a></p> <p>the accepted answer suggests the following formula for generating a random integer in between given <code>min</code> and <code>max</code>, with <code>min</code> and <code>max</code> being included into the range:</p> <pre><code>output = min + (rand() % (int)(max - min + 1)) </code></pre> <p>But it also says that</p> <blockquote> <p>This is still <em>slightly</em> biased towards lower numbers ... It's also possible to extend it so that it removes the bias.</p> </blockquote> <p>But it doesn't explain why it's biased towards lower numbers or how to remove the bias. So, the question is: is this the most optimal approach to generation of a random integer within a (signed) range while not relying on anything fancy, just <code>rand()</code> function, and in case if it is optimal, how to remove the bias?</p> <p><strong>EDIT:</strong></p> <p>I've just tested the <code>while</code>-loop algorithm suggested by @Joey against floating-point extrapolation:</p> <pre><code>static const double s_invRandMax = 1.0/((double)RAND_MAX + 1.0); return min + (int)(((double)(max + 1 - min))*rand()*s_invRandMax); </code></pre> <p>to see how much uniformly "balls" are "falling" into and are being distributed among a number of "buckets", one test for the floating-point extrapolation and another for the <code>while</code>-loop algorithm. But results turned out to be varying depending on the number of "balls" (and "buckets") so I couldn't easily pick a winner. The working code can be found at <a href="http://ideone.com/oPhqK" rel="noreferrer">this Ideone page</a>. For example, with 10 buckets and 100 balls the maximum deviation from the ideal probability among buckets is less for the floating-point extrapolation than for the <code>while</code>-loop algorithm (0.04 and 0.05 respectively) but with 1000 balls, the maximum deviation of the <code>while</code>-loop algorithm is lesser (0.024 and 0.011), and with 10000 balls, the floating-point extrapolation is again doing better (0.0034 and 0.0053), and so on without much of consistency. Thinking of the possibility that none of the algorithms consistently produces uniform distribution better than that of the other algorithm, makes me lean towards the floating-point extrapolation since it appears to perform faster than the <code>while</code>-loop algorithm. So is it fine to choose the floating-point extrapolation algorithm or my testings/conclusions are not completely correct?</p>
2012-08-01 12:05:04.280000+00:00
2022-02-07 00:54:17.093000+00:00
2017-05-23 12:09:39.290000+00:00
c++|c|random|uniform
['https://arxiv.org/abs/1805.10941', 'http://www.pcg-random.org/', 'https://groups.google.com/forum/#!original/comp.lang.c/qFv18ql_WlU/IK8KGZZFJx4J']
3
62,902,524
<p>You have touched on two points involving a random integer algorithm: Is it <em>optimal</em>, and is it <em>unbiased</em>?</p> <h3>Optimal</h3> <p>There are many ways to define an &quot;optimal&quot; algorithm. Here we look at &quot;optimal&quot; algorithms in terms of the number of random bits it uses on average. In this sense, <code>rand</code> is a poor method to use for randomly generated numbers, in part because it need not necessarily produce random bits (because <code>RAND_MAX</code> is not exactly specified)*. Instead, we will assume we have a &quot;true&quot; random generator that can produce unbiased and independent random bits.</p> <p>In 1976, D. E. Knuth and A. C. Yao showed that any algorithm that produces random integers with a given probability, using only random bits, can be represented as a binary tree, where random bits indicate which way to traverse the tree and each leaf (endpoint) corresponds to an outcome. (Knuth and Yao, &quot;The complexity of nonuniform random number generation&quot;, in <em>Algorithms and Complexity</em>, 1976.) They also gave bounds on the number of bits a given algorithm will need on average for this task. In this case, an <em>optimal</em> algorithm to generate integers in <code>[0, n)</code> uniformly, will need <strong>at least <code>log2(n)</code> and at most <code>log2(n) + 2</code> bits on average</strong>.</p> <p>There are many examples of <em>optimal</em> algorithms in this sense. One of them is the <a href="https://arxiv.org/abs/1304.1916" rel="nofollow noreferrer">Fast Dice Roller</a> by J. Lumbroso (2013) (implemented below), and perhaps another example is the algorithm given in the <a href="http://mathforum.org/library/drmath/view/65653.html" rel="nofollow noreferrer">Math Forum</a> in 2004. On the other hand, all the algorithms <a href="https://www.pcg-random.org/posts/bounded-rands.html" rel="nofollow noreferrer">surveyed by M. O'Neill</a> are not optimal, since they rely on generating blocks of random bits at a time. See also my note on <a href="https://peteroupc.github.io/randomfunc.html#RNDINT_Random_Integers_in_0_N" rel="nofollow noreferrer">integer generating algorithms</a>.</p> <p>The following shows an implementation of the Fast Dice Roller; although it's in JavaScript, not in C or C++, it's easy to adapt to either language and the idea is to show that it's not complicated to generate integers from bits in an optimal way. In the code, <code>(Math.random() &lt; 0.5 ? 0 : 1)</code> is JavaScript's way to generate an unbiased random bit.</p> <pre><code>function randomInt(minInclusive, maxExclusive) { var maxInclusive = (maxExclusive - minInclusive) - 1 var x = 1 var y = 0 while(true) { x = x * 2 var randomBit = (Math.random() &lt; 0.5 ? 0 : 1) y = y * 2 + randomBit if(x &gt; maxInclusive) { if (y &lt;= maxInclusive) { return y + minInclusive } // Rejection x = x - maxInclusive - 1 y = y - maxInclusive - 1 } } } </code></pre> <h3>Unbiased</h3> <p>However, any <em>optimal</em> integer generator that is also <em>unbiased</em> will, in general, run forever in the worst case, as also shown by Knuth and Yao. Going back to the binary tree, each one of the <code>n</code> outcomes labels leaves in the binary tree so that each integer in [0, n) can occur with probability 1/n. 
But if 1/n has a non-terminating binary expansion (which will be the case if n is not a power of 2), this binary tree will necessarily either—</p> <ul> <li>Have an &quot;infinite&quot; depth, or</li> <li>include &quot;rejection&quot; leaves at the end of the tree,</li> </ul> <p>and in either case, the algorithm won't run in constant time and will run forever in the worst case. (On the other hand, when <code>n</code> is a power of 2, the optimal binary tree will have a finite depth and no rejection nodes.) The Fast Dice Roller is an example of an algorithm that uses &quot;rejection&quot; events to ensure it's unbiased; see the comment in the code above.</p> <p>And for general <code>n</code>, there is no way to &quot;fix&quot; this worst case time complexity without introducing bias. For instance, modulo reductions (including the <code>min + (rand() % (int)(max - min + 1))</code> in your question) are equivalent to a binary tree in which rejection leaves are replaced with labeled outcomes — but since there are more possible outcomes than rejection leaves, only some of the outcomes can take the place of the rejection leaves, introducing bias. The same kind of binary tree — and the same kind of bias — results if you stop rejecting after a set number of iterations. (However, this bias may be negligible depending on the application. There are also security aspects to random integer generation, which are too complicated to discuss in this answer.)</p> <h3>Note</h3> <p>* There are <a href="https://stackoverflow.com/questions/52869166/why-is-the-use-of-rand-considered-bad/52881465#52881465">other problems with <code>rand()</code></a> as well. Perhaps the most serious here is the fact that the C standard does not specify a particular distribution for the numbers returned by <code>rand()</code>.</p>
2020-07-14 19:20:25.140000+00:00
2022-02-07 00:54:17.093000+00:00
2022-02-07 00:54:17.093000+00:00
null
11,758,809
<p>In this StackOverflow question:</p> <p><a href="https://stackoverflow.com/questions/5008804/generating-random-integer-from-a-range">Generating random integer from a range</a></p> <p>the accepted answer suggests the following formula for generating a random integer in between given <code>min</code> and <code>max</code>, with <code>min</code> and <code>max</code> being included into the range:</p> <pre><code>output = min + (rand() % (int)(max - min + 1)) </code></pre> <p>But it also says that</p> <blockquote> <p>This is still <em>slightly</em> biased towards lower numbers ... It's also possible to extend it so that it removes the bias.</p> </blockquote> <p>But it doesn't explain why it's biased towards lower numbers or how to remove the bias. So, the question is: is this the most optimal approach to generation of a random integer within a (signed) range while not relying on anything fancy, just <code>rand()</code> function, and in case if it is optimal, how to remove the bias?</p> <p><strong>EDIT:</strong></p> <p>I've just tested the <code>while</code>-loop algorithm suggested by @Joey against floating-point extrapolation:</p> <pre><code>static const double s_invRandMax = 1.0/((double)RAND_MAX + 1.0); return min + (int)(((double)(max + 1 - min))*rand()*s_invRandMax); </code></pre> <p>to see how much uniformly "balls" are "falling" into and are being distributed among a number of "buckets", one test for the floating-point extrapolation and another for the <code>while</code>-loop algorithm. But results turned out to be varying depending on the number of "balls" (and "buckets") so I couldn't easily pick a winner. The working code can be found at <a href="http://ideone.com/oPhqK" rel="noreferrer">this Ideone page</a>. For example, with 10 buckets and 100 balls the maximum deviation from the ideal probability among buckets is less for the floating-point extrapolation than for the <code>while</code>-loop algorithm (0.04 and 0.05 respectively) but with 1000 balls, the maximum deviation of the <code>while</code>-loop algorithm is lesser (0.024 and 0.011), and with 10000 balls, the floating-point extrapolation is again doing better (0.0034 and 0.0053), and so on without much of consistency. Thinking of the possibility that none of the algorithms consistently produces uniform distribution better than that of the other algorithm, makes me lean towards the floating-point extrapolation since it appears to perform faster than the <code>while</code>-loop algorithm. So is it fine to choose the floating-point extrapolation algorithm or my testings/conclusions are not completely correct?</p>
2012-08-01 12:05:04.280000+00:00
2022-02-07 00:54:17.093000+00:00
2017-05-23 12:09:39.290000+00:00
c++|c|random|uniform
['https://arxiv.org/abs/1304.1916', 'http://mathforum.org/library/drmath/view/65653.html', 'https://www.pcg-random.org/posts/bounded-rands.html', 'https://peteroupc.github.io/randomfunc.html#RNDINT_Random_Integers_in_0_N', 'https://stackoverflow.com/questions/52869166/why-is-the-use-of-rand-considered-bad/52881465#52881465']
5
13,716,015
<p>We wrote a research paper that seems relevant:</p> <p>Kamel Aouiche and Daniel Lemire, A Comparison of Five Probabilistic View-Size Estimation Techniques in OLAP, DOLAP 2007, pp. 17-24, 2007. <a href="http://arxiv.org/abs/cs.DB/0703058" rel="nofollow">http://arxiv.org/abs/cs.DB/0703058</a></p>
2012-12-05 03:53:38.810000+00:00
2012-12-05 03:53:38.810000+00:00
null
null
6,413,773
<p>Does anyone know a method to use to get a rough size of an OLAP cube based on a star schema data warehouse. Something based on the number of dimensions, the number of records in the dimension tables and the number of fact records and finally the number of aggregations or distinct records etc..</p> <p>The database I am looking at has a fact table of over 20 billion rows and a few dimension tables of 20 million, 70 million and 1.3 billion rows.</p> <p>Thanks Nicholas</p>
2011-06-20 15:36:03.487000+00:00
2018-10-04 03:35:26.760000+00:00
null
sql|database|database-design|olap|olap-cube
['http://arxiv.org/abs/cs.DB/0703058']
1
62,214,199
<p>I have finally figured out the problem.</p> <p>Batch normalization learns two parameters during training and uses them for inference. Thus it is necessary to change its behaviour using <code>eval()</code> to tell not to modify them any further.</p> <p>I then scrutinizingly checked the <a href="https://arxiv.org/pdf/1602.07868.pdf" rel="noreferrer">weight normalization</a> paper and found it to be 'inherently deterministic'. It simply decouples the original weight vectors as product of two quantities as shown below.</p> <pre><code>w = g . v </code></pre> <p>Obviously either you use LHS for computing output or RHS it does not matter. However by decoupling it into two vectors and passing them to optimizer and deleting the <code>w</code> parameter better training is achieved. For reasons refer the paper where things are nicely described.</p> <p>Thus it does not matter if weight normalization is removed or not during testing. To validate this I tried the following small code.</p> <pre><code>import torch import torch.nn as nn from torch.nn.utils import weight_norm as wn from torch.nn.utils import remove_weight_norm as wnr # define the model 'm' m = wn(nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, padding=1, bias=True)) ip = torch.rand(1,1,5,5) target = torch.rand(1,1,5,5) l1 = torch.nn.L1Loss() optimizer = torch.optim.Adam(m.parameters()) # begin training for _ in range(5): out = m(ip) loss = l1(out,target) loss.backward() optimizer.step() with torch.no_grad(): m.eval() print('\no/p after training with wn: {}'.format(m(ip))) wnr(m) print('\no/p after training without wn: {}'.format(m(ip))) # begin testing m2 = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3,padding=1, bias=True) m2.load_state_dict(m.state_dict()) with torch.no_grad(): m2.eval() out = m2(ip) print('\nOutput during testing and without weight_norm: {}'.format(out)) </code></pre> <p>And the output is below,</p> <pre><code>o/p after training with wn: tensor([[[[0.0509, 0.3286, 0.4612, 0.1795, 0.0307], [0.1846, 0.3931, 0.5713, 0.2909, 0.4026], [0.1716, 0.5971, 0.4297, 0.0845, 0.6172], [0.2938, 0.2389, 0.4478, 0.5828, 0.6276], [0.1423, 0.2065, 0.5024, 0.3979, 0.3127]]]]) o/p after training without wn: tensor([[[[0.0509, 0.3286, 0.4612, 0.1795, 0.0307], [0.1846, 0.3931, 0.5713, 0.2909, 0.4026], [0.1716, 0.5971, 0.4297, 0.0845, 0.6172], [0.2938, 0.2389, 0.4478, 0.5828, 0.6276], [0.1423, 0.2065, 0.5024, 0.3979, 0.3127]]]]) Output during testing and without weight_norm: tensor([[[[0.0509, 0.3286, 0.4612, 0.1795, 0.0307], [0.1846, 0.3931, 0.5713, 0.2909, 0.4026], [0.1716, 0.5971, 0.4297, 0.0845, 0.6172], [0.2938, 0.2389, 0.4478, 0.5828, 0.6276], [0.1423, 0.2065, 0.5024, 0.3979, 0.3127]]]]) </code></pre> <p>Please see that all the values are exactly same as only reparameterization is happening.</p> <p>Regarding,</p> <blockquote> <p>Then I tested two models using C++ code with libtorch. But the results are not the same.</p> </blockquote> <p>See <a href="https://github.com/pytorch/pytorch/issues/21275" rel="noreferrer">https://github.com/pytorch/pytorch/issues/21275</a> which reports a bug with TorchScript.</p> <p>And regarding,</p> <blockquote> <p>I am wondering what does weight_norm do in inference? Is it usefull?</p> </blockquote> <p>The answer is it does nothing. you do <code>x * 2</code> or <code>x * (1+1)</code> does not matter. It is not useful but not harmful either. So better remove it.</p>
2020-06-05 11:23:00.240000+00:00
2020-06-05 11:23:00.240000+00:00
null
null
62,188,472
<p>An important weight normalization technique was introduced in <a href="https://arxiv.org/pdf/1602.07868.pdf" rel="nofollow noreferrer">this paper</a> and has long been included in PyTorch, as follows:</p> <pre><code>from torch.nn.utils import weight_norm
weight_norm(nn.Conv2d(in_channels, out_channels))
</code></pre> <p>From the <a href="https://pytorch.org/docs/1.3.1/nn.html#torch.nn.utils.weight_norm" rel="nofollow noreferrer">docs</a> I gather that <code>weight_norm</code> does a re-parametrization before each <code>forward()</code> pass. But I am not sure if this re-parameterization also happens during inference, when everything is running inside <code>with torch.no_grad()</code> and the model is set to <code>eval()</code> mode.</p> <p>Can someone please clarify whether <code>weight_norm</code> is active only during training, or also during the inference mode as described above?</p> <p>Thank you</p>
2020-06-04 06:48:00.197000+00:00
2020-06-05 11:23:00.240000+00:00
null
python|pytorch
['https://arxiv.org/pdf/1602.07868.pdf', 'https://github.com/pytorch/pytorch/issues/21275']
2
56,909,944
<p>I just want to add that from the algorithmic point of view (i.e. when the cost only counts the number of comparisons and swaps), 2-pivot and 3-pivot quicksort are no better than classical quicksort (which uses 1 pivot), and may even be worse. However, they are faster in practice since they take advantage of modern computer architecture: specifically, they incur fewer cache misses. So if we removed all caches and had only a CPU and main memory, then, in my understanding, 2/3-pivot quicksort would be worse than classical quicksort.</p> <p>References:</p> <ul> <li>3-pivot quicksort: <a href="https://epubs.siam.org/doi/pdf/10.1137/1.9781611973198.6" rel="noreferrer">https://epubs.siam.org/doi/pdf/10.1137/1.9781611973198.6</a></li> <li>Analysis of why they perform better than classical quicksort: <a href="https://arxiv.org/pdf/1412.0193v1.pdf" rel="noreferrer">https://arxiv.org/pdf/1412.0193v1.pdf</a></li> <li>A complete, not-too-detailed reference: <a href="https://algs4.cs.princeton.edu/lectures/23Quicksort.pdf" rel="noreferrer">https://algs4.cs.princeton.edu/lectures/23Quicksort.pdf</a></li> </ul>
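<p>For reference, here is a minimal Python sketch of dual-pivot partitioning, the idea behind Yaroslavskiy's algorithm used by Java's <code>Arrays.sort</code> for primitives (this is illustrative, not the tuned production version, which adds insertion-sort cutoffs and other tricks):</p> <pre><code>def dual_pivot_quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo &gt;= hi:
        return
    if a[lo] &gt; a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]              # two pivots, p &lt;= q
    lt, gt, i = lo + 1, hi - 1, lo + 1
    while i &lt;= gt:
        if a[i] &lt; p:                 # belongs in the left part
            a[i], a[lt] = a[lt], a[i]
            lt += 1
        elif a[i] &gt; q:               # belongs in the right part
            while a[gt] &gt; q and i &lt; gt:
                gt -= 1
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
            if a[i] &lt; p:
                a[i], a[lt] = a[lt], a[i]
                lt += 1
        i += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]      # move the pivots into place
    a[hi], a[gt] = a[gt], a[hi]
    dual_pivot_quicksort(a, lo, lt - 1)
    dual_pivot_quicksort(a, lt + 1, gt - 1)
    dual_pivot_quicksort(a, gt + 1, hi)

data = [5, 3, 8, 1, 9, 2]
dual_pivot_quicksort(data)
print(data)  # [1, 2, 3, 5, 8, 9]
</code></pre>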
2019-07-05 22:51:40.537000+00:00
2019-07-05 22:51:40.537000+00:00
null
null
20,917,617
<p>I've never seen dual-pivot quicksort before. Is it an improved version of quicksort?<br> And what is the difference between dual-pivot quicksort and quicksort?</p>
2014-01-04 06:06:11.243000+00:00
2021-08-06 07:33:47.170000+00:00
2021-01-05 02:46:31.033000+00:00
java|sorting|quicksort
['https://epubs.siam.org/doi/pdf/10.1137/1.9781611973198.6', 'https://arxiv.org/pdf/1412.0193v1.pdf', 'https://algs4.cs.princeton.edu/lectures/23Quicksort.pdf']
3
55,768,063
<p>You can instead use the GoogLeNet <code>inception_v3</code> model (<a href="https://arxiv.org/abs/1512.00567" rel="nofollow noreferrer">"Rethinking the Inception Architecture for Computer Vision"</a>):</p> <pre><code>import torchvision google_net = torchvision.models.inception_v3(pretrained=True) </code></pre>
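<p>A quick usage sketch (the 299×299 input is what <code>inception_v3</code> expects; the random tensor is just a stand-in for a real, normalized image batch):</p> <pre><code>import torch
import torchvision

google_net = torchvision.models.inception_v3(pretrained=True)
google_net.eval()  # in train mode the model also returns auxiliary logits

with torch.no_grad():
    logits = google_net(torch.randn(1, 3, 299, 299))  # dummy batch
print(logits.shape)  # torch.Size([1, 1000])
</code></pre>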
2019-04-19 21:29:08.027000+00:00
2019-04-19 21:29:08.027000+00:00
null
null
55,762,706
<p>I'm trying to finetune a GoogleNet network over a specific dataset but I'm having trouble loading it. What I try now is:</p> <pre><code>model = torchvision.models.googlenet(pretrained=True) </code></pre> <p>However I get an error: </p> <pre><code>AttributeError: module 'torchvision.models' has no attribute 'googlenet' </code></pre> <p>I have the latest version of torchvision but reinstalled just to be sure, the error is still there.</p>
2019-04-19 13:34:03.537000+00:00
2019-04-19 21:29:28.337000+00:00
2019-04-19 21:29:28.337000+00:00
python|conv-neural-network|pytorch|pre-trained-model|torchvision
['https://arxiv.org/abs/1512.00567']
1
43,188,318
<p>Alex Rogozhnikov keeps track of a few datasets that you can use for learning to rank; check <a href="http://arogozhnikov.github.io/2015/06/26/learning-to-rank-software-datasets.html" rel="noreferrer">his blog post</a>.</p> <p>You can also use the <a href="http://www.arnetminer.org/citation" rel="noreferrer">DBLP dataset</a>, which was also used in a learning-to-rank task; check this paper: <a href="https://arxiv.org/pdf/1501.05132.pdf" rel="noreferrer">https://arxiv.org/pdf/1501.05132.pdf</a></p>
2017-04-03 15:14:35.487000+00:00
2017-04-03 15:14:35.487000+00:00
null
null
43,142,311
<p>Recently I started working on a learning-to-rank algorithm which involves feature extraction as well as ranking. The famous learning-to-rank datasets that I found on the Microsoft Research website contain query ids and features already extracted from the documents. Can someone suggest a good learning-to-rank dataset that has query-document pairs in their original form with good relevance judgments?</p>
2017-03-31 13:45:49.640000+00:00
2017-04-03 15:14:35.487000+00:00
null
machine-learning|information-retrieval|information-extraction
['http://arogozhnikov.github.io/2015/06/26/learning-to-rank-software-datasets.html', 'http://www.arnetminer.org/citation', 'https://arxiv.org/pdf/1501.05132.pdf']
3
55,438,598
<p>Gradient computation occurs inside the <code>optimizer.minimize</code> function, so no explicit gradient handling inside the loss function is needed. However, your implementation simply lacks an optimizable, trainable variable.</p> <pre><code>iou = get_iou(masks, predictions)
mean_iou_loss = tf.Variable(initial_value=-tf.log(tf.reduce_sum(iou)), name='loss', trainable=True)
train_op = tf.train.AdamOptimizer(0.001).minimize(mean_iou_loss)
</code></pre> <p>Numerical stability, differentiability and the particular implementation aside, this should be enough to use it as a loss function that changes with the iterations.</p> <p>Also take a look:</p> <p><a href="https://arxiv.org/pdf/1902.09630.pdf" rel="noreferrer">https://arxiv.org/pdf/1902.09630.pdf</a></p> <p><a href="https://stackoverflow.com/questions/40475246/why-does-one-not-use-iou-for-training">Why does one not use IOU for training?</a></p>
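<p>If it is of use, here is a minimal sketch of an alternative, fully batched &quot;soft&quot; IoU loss (assuming <code>masks</code> and <code>predictions</code> are float tensors of shape <code>[batch, height, width, channels]</code> with values in [0, 1]; the epsilon guards against division by zero):</p> <pre><code>import tensorflow as tf  # TF 1.x graph style, as in the question

def soft_iou_loss(masks, predictions, eps=1e-7):
    axes = [1, 2, 3]  # reduce over everything except the batch dimension
    intersection = tf.reduce_sum(masks * predictions, axis=axes)
    union = (tf.reduce_sum(masks, axis=axes)
             + tf.reduce_sum(predictions, axis=axes)
             - intersection)
    iou = (intersection + eps) / (union + eps)
    return 1.0 - tf.reduce_mean(iou)  # minimizing this maximizes the mean IoU

# loss = soft_iou_loss(masks, predictions)
# train_op = tf.train.AdamOptimizer(0.001).minimize(loss)
</code></pre>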
2019-03-31 07:02:41.233000+00:00
2019-03-31 07:02:41.233000+00:00
null
null
55,425,811
<p>This may be more of a Tensorflow gradient question. I have been attempting to implement Intersection over Union (IoU) as losses and have been running into some problems. To the point, here is the snippet of my code that computes the IoU:</p> <pre><code>def get_iou(masks, predictions): ious = [] for i in range(batch_size): mask = masks[i] pred = predictions[i] masks_sum = tf.reduce_sum(mask) predictions_sum = tf.reduce_mean(pred) intersection = tf.reduce_sum(tf.multiply(mask, pred)) union = masks_sum + predictions_sum - intersection iou = intersection / union ious.append(iou) return ious iou = get_iou(masks, predictions) mean_iou_loss = -tf.log(tf.reduce_sum(iou)) train_op = tf.train.AdamOptimizer(0.001).minimize(mean_iou_loss) </code></pre> <p>It works as predicted. However, the issue that I am having is the losses do not decrease. The model does train, though the results are less than ideal so I am wondering if I am implementing it correctly. Do I have to compute the gradients myself? I can compute the gradients for this IoU loss derived by <a href="https://arxiv.org/pdf/1608.01471.pdf" rel="noreferrer">this paper</a> using <code>tf.gradients()</code>, though I am not sure how to incorporate that with the <code>tf.train.AdamOptimizer()</code>. Reading the documentation, I feel like <code>compute_gradients</code> and <code>apply_gradients</code> are the commands that I need to use, but I can't find any examples on how to use them. My understanding is that the Tensorflow graph should be able to come up with the gradient itself via chain rule. So is a custom gradient even necessary in this problem? If the custom gradient is not necessary then I may just have an ill-posed problem and need to adjust some hyperparameters.</p> <p><strong>Note:</strong> I have tried Tensorflow's implementation of the IoU, <code>tf.metrics.mean_iou()</code>, but it spits out <code>inf</code> every time so I have abandoned that.</p>
2019-03-29 21:41:52.670000+00:00
2019-04-08 20:49:31.333000+00:00
2019-04-08 20:49:31.333000+00:00
python|tensorflow|conv-neural-network|object-detection|bounding-box
['https://arxiv.org/pdf/1902.09630.pdf', 'https://stackoverflow.com/questions/40475246/why-does-one-not-use-iou-for-training']
2
63,236,164
<p>It seems that <em>memory access granularity</em> is a broader term, and it can be applied to any kind of memory. The <em>cache line size</em> is then simply the granularity of the on-chip caches[<a href="https://arxiv.org/pdf/1605.06483.pdf" rel="nofollow noreferrer">1</a>].</p> <p>Quote from the link:</p> <blockquote> <p>In most modern systems, the memory subsystem is managed and accessed at multiple different granularities at various resources. The software stack typically accesses data at a word granularity (typically 4 or 8 bytes). The on-chip caches store data at a cache line granularity (typically 64 bytes).</p> </blockquote>
2020-08-03 19:58:16.610000+00:00
2020-08-03 19:58:16.610000+00:00
null
null
63,002,249
<p>I'm trying to pin down the concept of <em>memory access granularity</em>, which I've found mentioned in some articles.</p> <p>It's being said that <em>memory access granularity</em> is [<a href="https://developer.ibm.com/technologies/systems/articles/pa-dalign/" rel="nofollow noreferrer">1</a>]:</p> <blockquote> <p>the size in which a processor accesses memory</p> </blockquote> <p>On the other hand, the <em>cache line</em> is [<a href="https://medium.com/software-design/why-software-developers-should-care-about-cpu-caches-8da04355bb8a#:%7E:text=A%20cache%20line%20is%20the,region%20is%20read%20or%20written." rel="nofollow noreferrer">2</a>]:</p> <blockquote> <p>the unit of data transfer between cache and memory</p> </blockquote> <ul> <li>How does the <em>size</em> of a cache line relate to the <em>granularity</em> of the memory?</li> <li>Do they mean the same thing?</li> </ul> <p>Thanks!</p>
2020-07-20 19:06:31.427000+00:00
2020-08-03 19:58:16.610000+00:00
null
memory|cpu-cache|memory-access|granularity
['https://arxiv.org/pdf/1605.06483.pdf']
1
37,811,143
<p>In face recognition, the standard way to handle millions of classes is by using an <strong>embedding</strong>. The CNN produces an embedding of size between 64 and 1024.</p> <p>In this embedding space, each class of images should form a cluster of images, and clusters of different classes should be far apart.</p> <hr> <p>The approach of Facebook is described in their <a href="https://research.facebook.com/publications/deepface-closing-the-gap-to-human-level-performance-in-face-verification/" rel="noreferrer">DeepFace paper</a> (June 2014), but I would recommend a more recent approach from Google using <strong>triplet loss</strong>: <a href="http://arxiv.org/abs/1503.03832" rel="noreferrer">FaceNet: A Unified Embedding for Face Recognition and Clustering</a>.</p> <p><a href="https://i.stack.imgur.com/FM7LG.png" rel="noreferrer"><img src="https://i.stack.imgur.com/FM7LG.png" alt="triplet loss"></a></p>
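<p>For intuition, a minimal sketch of the triplet loss itself (the 128-d random vectors are placeholders; in practice they are L2-normalized CNN embeddings):</p> <pre><code>import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # squared Euclidean distances in the embedding space
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    # the positive must be closer to the anchor than the negative, by `margin`
    return max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(0)
a, p, n = (rng.normal(size=128) for _ in range(3))
print(triplet_loss(a, p, n))  # 0.0 once the triplet is satisfied
</code></pre>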
2016-06-14 11:47:36.143000+00:00
2016-06-14 11:47:36.143000+00:00
null
null
37,810,412
<p>I am puzzled by a question and would like your opinions. I am working on a convolutional neural network in TensorFlow. I have images with tags, there are around 10000 unique tags, and I would like images to be tagged automatically. Right now I use one-hot encoding for the labels, but for 10000 unique tags that will be a feature explosion. How can we deal with such situations?</p> <p>How does Facebook do it for face tagging? There are millions of faces. I guess they do not do one-hot encoding for face tags, right?</p>
2016-06-14 11:14:38.430000+00:00
2016-06-14 11:47:36.143000+00:00
null
tensorflow|deep-learning|one-hot-encoding
['https://research.facebook.com/publications/deepface-closing-the-gap-to-human-level-performance-in-face-verification/', 'http://arxiv.org/abs/1503.03832', 'https://i.stack.imgur.com/FM7LG.png']
3
69,635,165
<p>From the link you posted, the literal purpose of this function is: &quot;Purpose: F evaluates the function&quot;.</p> <pre><code>/******************************************************************************/

double f ( double x )

/******************************************************************************/
/*
  Purpose:

    F evaluates the function.
*/
{
  double pi;
  double value;

  pi = 3.141592653589793;
  value = 50.0 / ( pi * ( 2500.0 * x * x + 1.0 ) );

  return value;
}
</code></pre> <p>IMO this is an example of a really weak comment-block description :)</p> <p>Mathematically, <code>f</code> is a scaled Lorentzian rather than a formula for pi: since the derivative of <code>arctan(50x)/pi</code> is exactly <code>50 / (pi * (2500*x*x + 1))</code>, its integral over an interval such as [0, 10] has the known closed form <code>arctan(500)/pi ≈ 0.4994</code>. That makes it a convenient test integrand: the program can compare its parallel quadrature estimate against a known exact value.</p> <p>It also appears that it could be used to provide a sample payload ( <a href="https://arxiv.org/pdf/1505.07734.pdf" rel="nofollow noreferrer">as described here</a> ) with a known set of attributes (e.g. memory usage, run-time duration, etc.) as part of an approach to benchmark, or in some other way verify, the functionality of your parallel processing design; that is, a payload that is distributed, does something, and returns a response from several locations in your distributed network of uPs.</p>
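<p>You can check the closed form numerically with a few lines of Python (a sketch, independent of the MPI code):</p> <pre><code>import math

def f(x):
    # same integrand as in quad_mpi.c: a scaled Lorentzian
    return 50.0 / (math.pi * (2500.0 * x * x + 1.0))

# midpoint-rule estimate of the integral over [0, 10]
a, b, n = 0.0, 10.0, 1000000
h = (b - a) / n
estimate = h * sum(f(a + (i + 0.5) * h) for i in range(n))

exact = (math.atan(50.0 * b) - math.atan(50.0 * a)) / math.pi
print(estimate, exact)  # both ~0.4994
</code></pre>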
2021-10-19 17:14:34.500000+00:00
2021-10-19 19:24:13.380000+00:00
2021-10-19 19:24:13.380000+00:00
null
69,633,056
<pre><code>double f ( double x )
{
  double pi;
  double value;

  pi = 3.141592653589793;
  value = 50.0 / ( pi * ( 2500.0 * x * x + 1.0 ) );

  return value;
}
</code></pre> <p>This is part of the code of quad_mpi.c, and I don't know what this value is. I thought it was a formula for finding pi, but it already has pi in it. I'm trying to read all of quad_mpi.c, but it's very hard for me. <a href="https://people.sc.fsu.edu/%7Ejburkardt/c_src/quad_mpi/quad_mpi.c" rel="nofollow noreferrer">quad_mpi.c</a></p>
2021-10-19 14:49:22.173000+00:00
2021-10-19 19:24:13.380000+00:00
null
c|math|mpi
['https://arxiv.org/pdf/1505.07734.pdf']
1
71,473,663
<p>This kind of broad-advice question – about a very tough problem, paraphrasing text, that is still a very active research area – would be better answered by surveying the research literature.</p> <p>A great site for searching relevant papers – and then finding other related papers once you've set some positive examples – is <a href="http://www.arxiv-sanity.com/" rel="nofollow noreferrer">http://www.arxiv-sanity.com/</a>.</p> <p>Searching for [paraphrasing] or [summarization] would give you a running start in seeing the major techniques &amp; their limitations. And, once you start bookmarking papers with the little 'disk' icon, it can auto-suggest important related papers... so even if your first few finds are tangential or far from useful, it can lead you to the seminal papers &amp; prevailing cutting-edge algorithms/libraries pretty quickly.</p>
2022-03-14 20:08:18.767000+00:00
2022-03-14 20:08:18.767000+00:00
null
null
71,439,976
<p>I am doing a project at university and I need to train an algorithm to rephrase sentences. What can you advise for the implementation? Is it possible to translate into another language and then back again to get a paraphrased sentence in the end? Also, I want to use Word2Vec; or is that a bad idea?</p>
2022-03-11 14:24:17.103000+00:00
2022-03-14 20:08:18.767000+00:00
null
machine-learning|text|word2vec|linguistics
['http://www.arxiv-sanity.com/']
1
30,279,949
<p>Hadoop doesn't really do what you want it to do. There might be a way to define your own <a href="https://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/InputFormat.html" rel="nofollow">InputFormat</a> (and probably override some other classes as well) to force it to do what you want, but I can't really recommend that.</p> <p>The <a href="http://en.wikipedia.org/wiki/Map_%28parallel_pattern%29" rel="nofollow">map()</a> part of <a href="http://en.wikipedia.org/wiki/MapReduce" rel="nofollow">MapReduce</a> fundamentally relies on being able to decompose a problem "into independent subtasks, requiring no communication/synchronization between the subtasks". If your problem has input data that's a single record that can grow arbitrarily large, and cannot be broken up, MapReduce is fundamentally the wrong conceptual approach because you're not doing any decomposition.</p> <p>The way I'd think this could be decomposed (if you're speaking of the normal evolutionary algorithm) is to break it up by record (individual, in this case), and your file would be a collection of records. You could then split the file up by record. Depending on your file format, you can create an <code>InputFormat</code>, if necessary, so that it knows how to split the file up. Potentially this will result in rather large records, so you may want to tune your block size to be around the average size of your records, for better distribution.</p> <p>It looked like others did their generations either as separate jobs or in the reducer, and not in the mapper as you are proposing. You might read these papers on the topic.</p> <ul> <li><a href="http://www0.cs.ucl.ac.uk/staff/F.Sarro/resource/papers/C2.pdf" rel="nofollow">A Parallel Genetic Algorithm Based on Hadoop MapReduce for the Automatic Generation of JUnit Test Suite - Linda Di Geronimo, Filomena Ferrucci, Alfonso Murolo, Federica Sarro</a></li> <li><a href="http://www.researchgate.net/profile/Abdulhamit_Subasi/publication/258858471_Parallelization_of_genetic_algorithms_using_Hadoop_MapReduce/links/0046352945f2358517000000.pdf" rel="nofollow">Parallelization of genetic algorithms using Hadoop Map/Reduce - Dino Kečo, Abdulhamit Subasi</a></li> <li><a href="http://arxiv.org/pdf/1312.0086.pdf" rel="nofollow">A Framework for Genetic Algorithms Based on Hadoop - Filomena Ferrucci, M-Tahar Kechadi, Pasquale Salza, Federica Sarro </a></li> <li><a href="http://ieeexplore.ieee.org/xpl/login.jsp?tp=&amp;arnumber=5362925" rel="nofollow">Scaling Genetic Algorithms Using MapReduce - Verma, A.; Llora, X.; Goldberg, D.E.; Campbell, R.H.</a></li> </ul> <p>Alternatively, you could use an existing framework. <a href="https://www.safaribooksonline.com/library/view/apache-mahout-cookbook/9781849518024/ch10.html" rel="nofollow">Apache Mahout Cookbook, Chapter 10</a> describes how the <a href="http://watchmaker.uncommons.org/" rel="nofollow">Watchmaker Framework</a> can be used in <a href="http://mahout.apache.org/" rel="nofollow">Mahout</a> (Hadoop's machine learning library) for evolutionary computation.</p> <p>You may also find that <a href="http://spark.apache.org/" rel="nofollow">Spark</a> better suits your needs since it has better iterative computation since it keeps more in memory. There's even native support for evolution algorithms <a href="https://issues.apache.org/jira/browse/SPARK-3830" rel="nofollow">being built</a> for their machine learning library (<a href="http://spark.apache.org/mllib/" rel="nofollow">MLlib</a>).</p> <p>I hope this doesn't ruin your thesis.</p>
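<p>P.S. To make the per-record decomposition concrete, here is a minimal sketch of a fitness-evaluation mapper for Hadoop Streaming (Streaming mappers read records from stdin and emit tab-separated key/value pairs on stdout); the one-individual-per-line record format and the <code>fitness</code> function are assumptions for illustration only:</p> <pre><code>#!/usr/bin/env python
# mapper.py -- evaluate the fitness of each individual independently
import sys

def fitness(individual):
    # placeholder fitness function; substitute your GA's real one
    return sum(ord(c) for c in individual) % 1000

for line in sys.stdin:
    individual = line.strip()
    if individual:
        # emit: individual TAB fitness; a reducer can then select/breed
        print('%s\t%d' % (individual, fitness(individual)))
</code></pre>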
2015-05-16 19:36:20.643000+00:00
2015-05-16 19:52:07.427000+00:00
2015-05-16 19:52:07.427000+00:00
null
30,276,775
<p>I am new to Hadoop (v 2.6.0). For my thesis I am working with genetic algorithms in Hadoop (Linux). My problems:</p> <p>1: I want to duplicate the input file (text) in the HDFS location for all the slaves (not partition the file). For example, if I have a file of 200 MB, I want to send the whole file to the slaves (200 MB to slave 1, 200 MB to slave 2, etc.). Is that possible? If it is possible, what are the keys to doing that?</p> <p>2: Second question: I have 2 slaves and one master... when I start my program, is it executed on all slaves by default, or does Hadoop decide which slave will execute the program? If Hadoop decides, how do I make my program run on all slaves without exception? I hope that is possible, because when my program launches I see that it is executed only on slave 2 (not on slave 1).</p> <h3>Edit 1 with text from his comment-answer</h3> <p>Thank you for these details. My data cannot grow arbitrarily large: if I understand you right... well, if I have 200 individuals in my data, it stays 200 individuals with this algorithm.</p> <p>Inside the algorithm, if I specify 30 chromosomes, the algorithm will execute on every node with the 200 individuals (from the database in the input file) and with 30 chromosomes inside the execution. These parameters will be specified by me before starting my code; they are not parameters that will grow arbitrarily large in my algorithm.</p> <p>Can you give me more detail about InputFormat to start my algorithm?</p> <h3>Edit 2 with text from his second comment-answer</h3> <p><a href="http://arxiv.org/pdf/1312.0086.pdf" rel="nofollow">In this document</a>, in section C (related work), it is said: <code>The existent literature proposes some parallel version of GAs using the MapReduce paradigm. The first is an extension, by adding a second Reducer, of MapReduce named “MRPGA” [6] based on .Net. In this implementation a coordinator client manages the executions of the parallel GA iterations. The chosen model is the island model in which each participating node computes GAs operations for a portion of the entire population. In the first phase, each Mapper node receives its own portion of population and computes the fitness value for each of its individuals. The Reducer nodes of the first reduce phase receive the individuals of the correspondent island and apply the selection function. The final Reducer computes the global selection and the other following GAs functions.</code></p> <p>Those are the only details in this document about this approach. The "portion of population" here means the number of chromosomes (a group of chromosomes is called a population). If you decide to work with 2000 chromosomes and 5 slaves, then just specify 400 chromosomes in the code and every slave will work with just 400 (400*5 = 2000)... that's my point, because if you specify 2000 for one node, that is very much and the fitness takes a huge time. Do you understand? The real data that I will partition is the chromosomes, not the data of the input file, and I want to use a huge number of chromosomes, because with a big number of chromosomes you get the approximate solution that you need.</p>
2015-05-16 14:19:42.173000+00:00
2015-05-16 22:27:19.427000+00:00
2015-05-16 22:27:19.427000+00:00
file|hadoop|mapreduce|hdfs|replication
['https://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapreduce/InputFormat.html', 'http://en.wikipedia.org/wiki/Map_%28parallel_pattern%29', 'http://en.wikipedia.org/wiki/MapReduce', 'http://www0.cs.ucl.ac.uk/staff/F.Sarro/resource/papers/C2.pdf', 'http://www.researchgate.net/profile/Abdulhamit_Subasi/publication/258858471_Parallelization_of_genetic_algorithms_using_Hadoop_MapReduce/links/0046352945f2358517000000.pdf', 'http://arxiv.org/pdf/1312.0086.pdf', 'http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5362925', 'https://www.safaribooksonline.com/library/view/apache-mahout-cookbook/9781849518024/ch10.html', 'http://watchmaker.uncommons.org/', 'http://mahout.apache.org/', 'http://spark.apache.org/', 'https://issues.apache.org/jira/browse/SPARK-3830', 'http://spark.apache.org/mllib/']
13
40,465,742
<ol> <li>Accuracy is always 1 - see <a href="https://stackoverflow.com/a/39720541/1714410">this answer</a>. </li> <li><code>"EuclideanLoss"</code> layer is a good fit for regression. </li> <li>Subtracting the mean should help the net converge better. Keep using it. You can read more about the importance of data normalization and what can be done in that respect <a href="https://arxiv.org/abs/1502.01852" rel="nofollow noreferrer">here</a>.</li> </ol>
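<p>Regarding point 3, a minimal sketch of per-channel mean subtraction on toy data (in Caffe, this is what the <code>mean_file</code>/<code>mean_value</code> transform parameters do for you):</p> <pre><code>import numpy as np

images = np.random.rand(100, 3, 64, 64).astype(np.float32)  # toy NCHW batch
mean = images.mean(axis=(0, 2, 3), keepdims=True)           # one mean per channel
normalized = images - mean                                  # zero-centered input
</code></pre>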
2016-11-07 12:54:59.617000+00:00
2016-11-07 12:54:59.617000+00:00
2017-05-23 12:13:33.373000+00:00
null
40,462,524
<p>I have a fully convotuional network for depth estimation like this: (only upper and lower layers for the sake of simplicity):</p> <pre><code># input: image and depth_image layer { name: "train-data" type: "Data" top: "data" top: "silence_1" include { phase: TRAIN } transform_param { #mean_file: "mean_train.binaryproto" scale: 0.00390625 } data_param { source: "/train_lmdb" batch_size: 4 backend: LMDB } } layer { name: "train-depth" type: "Data" top: "depth" top: "silence_2" include { phase: TRAIN } transform_param { scale: 0.00390625 } data_param { source: "train_depth_lmdb" batch_size: 4 backend: LMDB } } layer { name: "val-data" type: "Data" top: "data" top: "silence_1" include { phase: TEST } transform_param { #mean_file: "mean_val.binaryproto" scale: 0.00390625 } data_param { source: "val_lmdb" batch_size: 4 backend: LMDB } } layer { name: "val-depth" type: "Data" top: "depth" top: "silence_2" include { phase: TEST } transform_param { scale: 0.00390625 } data_param { source: "val_depth_lmdb" batch_size: 4 backend: LMDB } } ################## Silence unused labels ################## layer { name: "silence_layer_1" type: "Silence" bottom: "silence_1" } layer { name: "silence_layer_2" type: "Silence" bottom: "silence_2" } .... layer { name: "conv" type: "Convolution" bottom: "concat" top: "conv" convolution_param { num_output: 1 kernel_size: 5 pad: 2 stride: 1 engine: CUDNN weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "relu" type: "ReLU" bottom: "conv" top: "result" relu_param{ negative_slope: 0.01 engine: CUDNN } } # Error layer { name: "accuracy" type: "Accuracy" bottom: "result" bottom: "depth" top: "accuracy" include { phase: TEST } } layer { name: "loss" type: "EuclideanLoss" bottom: "result" bottom: "depth" top: "loss" } </code></pre> <p>Now I have 3 questions:</p> <p>When I am training the network the accuracy layer is always 1. I do not understand why?</p> <p>Is EuclideanLayer the correct layer for this purpose?</p> <p>Is the mean needed in such a case or can I neglect the mean?</p> <pre><code>#Define image transformers transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape}) transformer.set_mean('data', mean_array) transformer.set_transpose('data', (2,0,1)) image = "test.png" img = caffe.io.load_image(image, False) img = caffe.io.resize_image( img, (IMAGE_WIDTH, IMAGE_HEIGHT)) net.blobs['data'].data[...] = transformer.preprocess('data', img) pred = net.forward() output_blob = pred['result'] </code></pre>
2016-11-07 10:08:44.430000+00:00
2016-11-07 13:24:10.603000+00:00
2016-11-07 13:24:10.603000+00:00
machine-learning|neural-network|deep-learning|caffe|conv-neural-network
['https://stackoverflow.com/a/39720541/1714410', 'https://arxiv.org/abs/1502.01852']
2
44,097,423
<p>FMI/Model-exchange is targeted at the distribution of models (systems of differential algebraic equations), whereas FMI/Co-Simulation targets the distribution of models along with an appropriate solver.</p> <p>Due to the many challenges in coding solvers with appropriate support for rollback, it is hard to come by exported FMUs that can be used in a strongly coupled co-simulation.</p> <p>So, to answer your question: it depends on the scenario. If you wish to simulate a strongly coupled physical system using FMI/Co-Simulation, and you wish to do so with multiple FMUs, these had better support rollback to avoid stability issues. If you have, for example, a scenario where one FMU simulates the physical system and another FMU simulates a controller, then you may do well with a loose coupling approach.</p> <p>It is hard to pinpoint exactly how strongly coupled two FMUs need to be before you need to apply a stabilization technique. Have a look at the following experiment, which compares a strong coupling master with a loose coupling one. Both masters are used for the co-simulation of a strongly coupled mechanical system: <a href="https://github.com/into-cps/case-study_mass-springer-damper" rel="nofollow noreferrer">https://github.com/into-cps/case-study_mass-springer-damper</a></p> <p>Also, see the following report (disclosure: I contributed to it :) ) for an introduction to these concepts: <a href="https://arxiv.org/pdf/1702.00686v1" rel="nofollow noreferrer">https://arxiv.org/pdf/1702.00686v1</a></p>
2017-05-21 13:24:07.783000+00:00
2017-05-21 13:24:07.783000+00:00
null
null
43,784,038
<p>I am new to the topic of co-simulation. I am familiar with the definitions (based on Trcka, &quot;COMPARISON OF CO-SIMULATION APPROACHES FOR BUILDING AND HVAC/R SYSTEM SIMULATION&quot;):</p> <ul> <li>Quasi-dynamic coupling, also called loose coupling, or ping-pong coupling, where distributed models run in sequence, and one model uses the known output values, based on the values at the previous time steps, of the coupled model.</li> <li>Fully-dynamic coupling, also called strong coupling, or onion coupling, where distributed models iterate within each time step until the error estimate falls within a predefined tolerance.</li> </ul> <p>My question: Is FMI/co-simulation a loose coupling method? What is FMI/model-exchange? From my understanding, it is not a strong coupling method. Am I understanding it correctly that in model-exchange, the tool that imports the FMU collects all ODE and algebraic equations and solves the entire system with a single solver? So it is more a standard to describe models in a unified way so that they can be integrated in different simulation environments?</p> <p>Thank you very much for your help</p>
2017-05-04 13:18:53.727000+00:00
2017-05-21 13:24:07.783000+00:00
null
simulation|fmi
['https://github.com/into-cps/case-study_mass-springer-damper', 'https://arxiv.org/pdf/1702.00686v1']
2
4,617,053
<p>There's a paper titled <a href="http://arxiv.org/abs/0707.1532v1" rel="noreferrer" title="Sorting and Selection in Posets">Sorting and Selection in Posets</a> available on arxiv.org which discusses sorting methods of order O((w^2)nlog(n/w)), where w is the "width" of the poset. I haven't read the paper, but it seems like it covers what you are looking for.</p>
2011-01-06 16:17:47.360000+00:00
2011-01-06 16:17:47.360000+00:00
null
null
4,600,258
<p>There are a huge number of sorting algorithms out there, but most of them only work on totally-ordered sets because they assume that any two elements are comparable. However, are there any good algorithms out there for sorting posets, where some elements are incomparable? That is, given a set S of elements drawn from a poset, what is the best way to output an ordering x<sub>1</sub>, x<sub>2</sub>, ..., x<sub>n</sub> such that if x<sub>i</sub> &le; x<sub>j</sub>, then i &le; j?</p>
2011-01-05 02:14:54.447000+00:00
2015-02-09 23:40:47.453000+00:00
null
language-agnostic|sorting|poset
['http://arxiv.org/abs/0707.1532v1']
1
16,652,476
<p>And there are now approaches other than modularity, designed to overcome the limitations mentioned by job, such as <a href="http://en.wikipedia.org/wiki/Surprise_%28networks%29" rel="nofollow">surprise</a>, or the B- and <a href="http://arxiv.org/abs/0907.3708" rel="nofollow">C-scores</a> (designed to be significance indices).</p>
2013-05-20 15:05:06.397000+00:00
2013-05-20 15:05:06.397000+00:00
null
null
6,759,538
<p>this is my first question on Stack Overflow. This is not really a programming question but since most of us have to deal with theoretical problems at some point and there might be some graph theory specialists around, I thought I might give it a go.</p> <p>I am currently doing some research on multilingual websites and I found some interesting patterns in the website structure. The graphs below are the website graphs of two different multilingual websites. Sorry, I don't have enough rep points to post images so I leave them as links. I used the Force Atlas algorithm for the layout. Vertices are colored according to the page language. The shaded areas correspond to the subgraphs of a specific language.</p> <p>Here is the graph of the website where different language versions of the same content are very closely linked. Hence the planes representing the different language versions are overlapping.</p> <p><a href="http://www.ai.soc.i.kyoto-u.ac.jp/~julien/phd/images/tight.png" rel="nofollow">http://www.ai.soc.i.kyoto-u.ac.jp/~julien/phd/images/tight.png</a></p> <p>In this second graph, we have a website where language versions of a website are almost independent, thus we have almost no overlap.</p> <p><a href="http://www.ai.soc.i.kyoto-u.ac.jp/~julien/phd/images/loose.png" rel="nofollow">http://www.ai.soc.i.kyoto-u.ac.jp/~julien/phd/images/loose.png</a></p> <p>So here is my question:</p> <p><strong>Is there a specific metric to quantify this overlap? If so, what is it named?</strong></p> <p>Since I used a force-based layout, the amount of overlap reflects the number of edges between the language subgraphs. So I guess something like taking the ratio of the number of edges within a subgraph to the number of edges going outside/coming inside that subgraph might do the trick. I am sure I am not the first to get this idea, so I was wondering if this metric had a name. I could then Google it from there :)</p> <p>Thank you in advance!</p>
2011-07-20 09:13:38.123000+00:00
2013-05-20 15:05:06.397000+00:00
null
graph|graph-theory|discrete-mathematics
['http://en.wikipedia.org/wiki/Surprise_%28networks%29', 'http://arxiv.org/abs/0907.3708']
2
6,763,228
<p>It sounds like what you're looking for is <a href="http://en.wikipedia.org/wiki/Modularity_%28networks%29" rel="nofollow">Network Modularity</a>. Given a graph and a partition (breaking the graph into disjoint subgraphs), the modularity is defined as:</p> <blockquote> <p>The fraction of the edges that fall within the given groups minus the expected such fraction if edges were distributed at random.</p> </blockquote> <p>Modularity was the basis of some of the first <a href="http://en.wikipedia.org/wiki/Community_structure" rel="nofollow">community detection</a> algorithms on networks, which try to find sets of nodes that are densely connected. Recently, though, modularity has been shown to be a problematic metric for community detection because of resolution limits that cause it to miss small groups or to break apart well-defined groups in certain cases (see <a href="http://arxiv.org/abs/1107.1155" rel="nofollow">this paper</a>).</p>
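<p>As a quick illustration, a sketch using the <code>networkx</code> library's community utilities (assuming a reasonably recent version of the library):</p> <pre><code>import networkx as nx
from networkx.algorithms.community import modularity

G = nx.karate_club_graph()
# partition by the 'club' node attribute that ships with this graph
partition = [{n for n in G if G.nodes[n]["club"] == "Mr. Hi"},
             {n for n in G if G.nodes[n]["club"] == "Officer"}]
print(modularity(G, partition))  # noticeably positive: denser than random
</code></pre>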
2011-07-20 14:04:20.747000+00:00
2011-07-20 14:04:20.747000+00:00
null
null
6,759,538
<p>this is my first question on Stack Overflow. This is not really a programming question but since most of us have to deal with theoretical problems at some point and there might be some graph theory specialists around, I thought I might give it a go.</p> <p>I am currently doing some research on multilingual websites and I found some interesting patterns in the website structure. The graphs below are the website graphs of two different multilingual websites. Sorry, I don't have enough rep points to post images so I leave them as links. I used the Force Atlas algorithm for the layout. Vertices are colored according to the page language. The shaded areas correspond to the subgraphs of a specific language.</p> <p>Here is the graph of the website where different language versions of the same content are very closely linked. Hence the planes representing the different language versions are overlapping.</p> <p><a href="http://www.ai.soc.i.kyoto-u.ac.jp/~julien/phd/images/tight.png" rel="nofollow">http://www.ai.soc.i.kyoto-u.ac.jp/~julien/phd/images/tight.png</a></p> <p>In this second graph, we have a website where language versions of a website are almost independent, thus we have almost no overlap.</p> <p><a href="http://www.ai.soc.i.kyoto-u.ac.jp/~julien/phd/images/loose.png" rel="nofollow">http://www.ai.soc.i.kyoto-u.ac.jp/~julien/phd/images/loose.png</a></p> <p>So here is my question:</p> <p><strong>Is there a specific metric to quantify this overlap? If so, what is it named?</strong></p> <p>Since I used a force-based layout, the amount of overlap reflects the number of edges between the language subgraphs. So I guess something like taking the ratio of the number of edges within a subgraph to the number of edges going outside/coming inside that subgraph might do the trick. I am sure I am not the first to get this idea, so I was wondering if this metric had a name. I could then Google it from there :)</p> <p>Thank you in advance!</p>
2011-07-20 09:13:38.123000+00:00
2013-05-20 15:05:06.397000+00:00
null
graph|graph-theory|discrete-mathematics
['http://en.wikipedia.org/wiki/Modularity_%28networks%29', 'http://en.wikipedia.org/wiki/Community_structure', 'http://arxiv.org/abs/1107.1155']
3
37,066,615
<p>A possible way to go about this is to assign those topics to the sentences in each section [1]. Since you seem to have annotated data, you can train a "sentence topic/section" model with it. According to [1], even a multinomial naïve Bayes classifier already does the job pretty well.</p> <p>As to the summarization aspect, unless you have training data, I would look into <em>extractive</em> summarization techniques [2] - that is, selecting the best sentences from the existing ones for the summary. The work of [2], LexRank, has a few implementations in the wild you can use. If you have summaries to learn from, you can look into <em>abstractive</em> techniques that generate new sentences from the existing ones [3], too. If you check [4], [3] has some sample implementations floating around.</p> <p>[1] <a href="http://bioinformatics.oxfordjournals.org/content/25/23/3174.full" rel="nofollow">http://bioinformatics.oxfordjournals.org/content/25/23/3174.full</a></p> <p>[2] <a href="http://jair.org/papers/paper1523.html" rel="nofollow">http://jair.org/papers/paper1523.html</a></p> <p>[3] <a href="http://arxiv.org/abs/1509.00685" rel="nofollow">http://arxiv.org/abs/1509.00685</a></p> <p>[4] <a href="http://gitxiv.com/" rel="nofollow">http://gitxiv.com/</a></p>
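<p>For illustration, a minimal sketch of such a sentence-to-section classifier with scikit-learn (the toy sentences and labels are stand-ins for your annotated corpus):</p> <pre><code>from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = ["A detective investigates a murder in a small town.",
             "John Doe stars as the detective.",
             "Critics praised the film's pacing."]
labels = ["Plot", "Cast", "Reception"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(sentences, labels)
print(clf.predict(["Critics praised the ending."]))  # ['Reception']
</code></pre>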
2016-05-06 07:22:53.190000+00:00
2016-05-06 07:35:19.467000+00:00
2016-05-06 07:35:19.467000+00:00
null
37,041,935
<p>I'd like to do the following given a document:</p> <ul> <li>create a summary using pre-existing topics </li> </ul> <p>In the first scenario, documents are neatly organized in a uniform way. For example, most Wikipedia movie articles have the following subtopics (ex: <a href="https://en.wikipedia.org/wiki/Between_Us_(2012_film)" rel="nofollow">https://en.wikipedia.org/wiki/Between_Us_(2012_film)</a>)</p> <ul> <li>Plot</li> <li>Cast</li> <li>Reception</li> <li>other optional topics</li> </ul> <p>In the second scenario, documents contain the same info as above; however, they do NOT have a clean organization. Documents may use the same or similar language but be organized differently.</p> <p>In both cases, given the subtopics, I'd like to extract this info from a document.</p> <p>Are there any machine learning/natural language processing strategies/algorithms that I can use? A combination of algorithms is fine. Algorithms that mostly work are also fine.</p> <p>Update: It looks like what I want is <em>Information Extraction</em>.</p>
2016-05-05 03:41:23.833000+00:00
2016-05-06 07:35:19.467000+00:00
2016-05-05 04:28:22.140000+00:00
algorithm|machine-learning|nlp|artificial-intelligence|information-extraction
['http://bioinformatics.oxfordjournals.org/content/25/23/3174.full', 'http://jair.org/papers/paper1523.html', 'http://arxiv.org/abs/1509.00685', 'http://gitxiv.com/']
4
31,859,670
<p>A natural question to ask here is: for any given k, what is the maximum number of monotonic paths avoiding a set S of k points (where the maximum is taken over all possible sets S)?</p> <p>This is actually an open problem, raised in a paper of Johnson, Leader and Russell: <a href="http://arxiv.org/pdf/1309.4643.pdf" rel="nofollow">http://arxiv.org/pdf/1309.4643.pdf</a></p>
2015-08-06 15:20:58.687000+00:00
2015-08-06 15:20:58.687000+00:00
null
null
12,278,288
<p>In a rectangular grid of size m*n, the number of paths from (0,0) to (m,n) (without backtracking) is (m+n)!/(m!*n!). Now if there are certain points in the grid which we want to avoid, how can we calculate the number of paths avoiding those points?</p>
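<p>(For concreteness, here is a minimal dynamic-programming sketch of the quantity in question; forbidden points simply get a path count of zero:)</p> <pre><code>def count_paths(m, n, blocked):
    # monotone lattice paths from (0,0) to (m,n) avoiding `blocked` points
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    dp[0][0] = 1
    for i in range(m + 1):
        for j in range(n + 1):
            if (i, j) in blocked:
                dp[i][j] = 0
                continue
            if i &gt; 0:
                dp[i][j] += dp[i - 1][j]
            if j &gt; 0:
                dp[i][j] += dp[i][j - 1]
    return dp[m][n]

print(count_paths(2, 2, set()))      # 6 = 4! / (2! * 2!)
print(count_paths(2, 2, {(1, 1)}))   # 2: only the two boundary paths remain
</code></pre>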
2012-09-05 09:25:45.387000+00:00
2015-08-06 15:20:58.687000+00:00
2012-09-05 09:40:17.337000+00:00
algorithm
['http://arxiv.org/pdf/1309.4643.pdf']
1
39,320,149
<p>For C, please check out Annex G in C99 or C11. At least GCC follows this; I would be surprised if clang didn't.</p> <p>For C++, IIRC the C++ standard has chosen not to incorporate C99/C11 Annex G, and the algorithms for complex multiplication/division are up to the implementation.</p> <p>The Fortran standard does not specify how complex multiplication or division must be implemented. For division, GFortran uses the common Smith (1962) method, except when -ffast-math is specified, in which case the naive algorithm is used.</p> <p>For a comparison of different algorithms for computing complex division, please see <a href="http://arxiv.org/abs/1210.4539" rel="nofollow">http://arxiv.org/abs/1210.4539</a></p>
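<p>For illustration, a minimal Python sketch of Smith's (1962) method for complex division; it scales by the larger-magnitude component to avoid the overflow/underflow of the naive formula (the extra INF/NAN recovery steps of Annex G are omitted):</p> <pre><code>def smith_div(a, b, c, d):
    # computes (a + ib) / (c + id) without forming c*c + d*d directly
    if abs(c) &gt;= abs(d):
        r = d / c
        t = 1.0 / (c + d * r)
        return (a + b * r) * t, (b - a * r) * t
    else:
        r = c / d
        t = 1.0 / (c * r + d)
        return (a * r + b) * t, (b * r - a) * t

print(smith_div(1.0, 2.0, 3.0, 4.0))  # (0.44, 0.08) == (1+2j)/(3+4j)
</code></pre>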
2016-09-04 18:23:51.087000+00:00
2016-09-04 18:28:52.773000+00:00
2016-09-04 18:28:52.773000+00:00
null
39,318,880
<p>In real floating point arithmetic we have the additional symbols INF (infinity), NAN and the signed zero. For complex arithmetic this is more difficult. If one uses the &quot;naive&quot; rules for multiplication and division</p> <pre><code>(a + ib)(c + id) = (ac - bd) + i(ad + bc)
(a + ib)/(c + id) = ( (ac + bd) + i(bc - ad) ) / (c*c + d*d)
</code></pre> <p>one gets wrong (*) results for almost all cases where one variable of a,b,c,d is INF or NAN.</p> <p>For example</p> <ul> <li>(1 + i0)*(INF + i0) = INF + iNAN . As compared to real arithmetic 1*INF = INF</li> <li>(0 + i1)* (NAN + i0) = NAN + iNAN. However one would expect i*NAN = (0+iNAN)</li> <li>1 / (0+0i) = NAN + iNAN. This breaks for example z = 1/(1/z), which works perfectly in real arithmetic.</li> </ul> <p>This list could easily go on.</p> <p>The question is, how to correctly implement complex division and multiplication so that all cases, including those where the real or imaginary part is INF or NAN, give meaningful results? Also, are there programming languages which guarantee correct behavior for complex arithmetic with INF and NAN?</p> <p>EDIT: I would like to know which programming language standard (version) requires correct complex arithmetic with INF and NAN. The languages I am most interested in are the C, C++ and FORTRAN families.</p> <p>(*) wrong in the sense that it is mathematically not meaningful, or is counter-intuitive in the sense of IEEE-754.</p>
2016-09-04 16:05:00.530000+00:00
2016-09-04 18:28:52.773000+00:00
2016-09-04 16:36:36.097000+00:00
math|floating-point|ieee-754|complex-numbers
['http://arxiv.org/abs/1210.4539']
1
42,532,658
<h2>How can I fight overfitting?</h2> <ul> <li>Get more data (or data augmentation)</li> <li>Dropout (see <a href="https://arxiv.org/abs/1207.0580" rel="noreferrer">paper</a>, <a href="https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf" rel="noreferrer">explanation</a>, <a href="https://datascience.stackexchange.com/q/16045/8820">dropout for cnns</a>)</li> <li>DropConnect</li> <li>Regularization (see <a href="https://arxiv.org/pdf/1707.09725.pdf#page=134" rel="noreferrer">my masters thesis</a>, page 85 for examples)</li> <li>Feature scale clipping</li> <li>Global average pooling</li> <li>Make network smaller</li> <li>Early stopping</li> </ul> <h2>How can I improve my CNN?</h2> <blockquote> <p>Thoma, Martin. "<a href="https://arxiv.org/pdf/1707.09725.pdf" rel="noreferrer">Analysis and Optimization of Convolutional Neural Network Architectures</a>." arXiv preprint arXiv:1707.09725 (2017).</p> </blockquote> <p>See chapter 2.5 for analysis techniques. As written in the beginning of that chapter, you can usually do the following:</p> <ul> <li>(I1) Change the problem definition (e.g., the classes which are to be distinguished)</li> <li>(I2) Get more training data</li> <li>(I3) Clean the training data</li> <li>(I4) Change the preprocessing (see Appendix B.1)</li> <li>(I5) Augment the training data set (see Appendix B.2)</li> <li>(I6) Change the training setup (see Appendices B.3 to B.5)</li> <li>(I7) Change the model (see Appendices B.6 and B.7)</li> </ul> <h2>Misc</h2> <blockquote> <p>The CNN has to classify 27 different labels, so unsurprisingly, a major problem has been addressing overfitting.</p> </blockquote> <p>I don't understand how this is connected. You can have hundreds of labels without a problem of overfitting.</p>
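<p>As a minimal sketch of the dropout item above in TF 1.x graph style (the <code>flattened</code> placeholder stands in for the features coming out of your conv stack; feed <code>keep_prob=0.5</code> during training and <code>1.0</code> at test time):</p> <pre><code>import tensorflow as tf  # TF 1.x, as in the question

flattened = tf.placeholder(tf.float32, [None, 2048])  # stand-in for conv features
keep_prob = tf.placeholder(tf.float32)

fc1 = tf.layers.dense(flattened, 1024, activation=tf.nn.relu)
fc1_drop = tf.nn.dropout(fc1, keep_prob)  # randomly zeroes activations
logits = tf.layers.dense(fc1_drop, 27)    # 27 sign-language classes
</code></pre>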
2017-03-01 13:08:45.890000+00:00
2018-04-17 08:37:37.410000+00:00
2018-04-17 08:37:37.410000+00:00
null
36,139,980
<p>I'm using TensorFlow to train a Convolutional Neural Network (CNN) for a sign language application. The CNN has to classify 27 different labels, so unsurprisingly, a major problem has been addressing overfitting. I've taken several steps to accomplish this:</p> <ol> <li>I've collected a large amount of high-quality training data (over 5000 samples per label).</li> <li>I've built a reasonably sophisticated pre-processing stage to help maximize invariance to things like lighting conditions.</li> <li>I'm using dropout on the fully-connected layers.</li> <li>I'm applying L2 regularization to the fully-connected parameters.</li> <li>I've done extensive hyper-parameter optimization (to the extent possible given HW and time limitations) to identify the simplest model that can achieve close to 0% loss on training data.</li> </ol> <p>Unfortunately, even after all these steps, I'm finding that I can't achieve much better that about 3% test error. (It's not terrible, but for the application to be viable, I'll need to improve that substantially.)</p> <p>I suspect that the source of the overfitting lies in the convolutional layers since I'm not taking any explicit steps there to regularize (besides keeping the layers as small as possible). But based on examples provided with TensorFlow, it doesn't appear that regularization or dropout is typically applied to convolutional layers.</p> <p>The only approach I've found online that explicitly deals with prevention of overfitting in convolutional layers is a fairly new approach called <a href="http://www.matthewzeiler.com/pubs/iclr2013/iclr2013.pdf" rel="noreferrer">Stochastic Pooling</a>. Unfortunately, it appears that there is no implementation for this in TensorFlow, at least not yet.</p> <p>So in short, is there a recommended approach to prevent overfitting in convolutional layers that can be achieved in TensorFlow? Or will it be necessary to create a custom pooling operator to support the Stochastic Pooling approach?</p> <p>Thanks for any guidance!</p>
2016-03-21 19:33:25.577000+00:00
2018-04-17 08:37:37.410000+00:00
null
tensorflow|conv-neural-network
['https://arxiv.org/abs/1207.0580', 'https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf', 'https://datascience.stackexchange.com/q/16045/8820', 'https://arxiv.org/pdf/1707.09725.pdf#page=134', 'https://arxiv.org/pdf/1707.09725.pdf']
5
55,106,542
<p>Some modes of the "Paragraph Vector" algorithm (aka <code>Doc2Vec</code> in libraries like Python <code>gensim</code>) will train both doc-vectors and word-vectors into the a shared coordinate space. (Specifically, any of the PV-DM <code>dm=1</code> modes, or the PV-DBOW mode <code>dm=0</code> if you enable the non-default interleaved word-vector training using <code>dbow_words=1</code>.)</p> <p>In such a case, you <strong>can</strong> compare <code>Doc2Vec</code> doc-vectors with the co-trained word-vectors, with some utility. You can see some examples in the followup paper form the originators of the "Paragraph Vector" algorithm, "<a href="https://arxiv.org/abs/1507.07998" rel="nofollow noreferrer">Document Embedding with Paragraph Vectors</a>". </p> <p>However, beware that vectors for single words, having been trained in contexts of use, may not have vectors that match what we'd expect of those same words when intended as overarching categories. For example, <code>education</code> as used in many sentences wouldn't necessarily assume all facets/breadth that you might expect from <code>Education</code> as a category-header. </p> <p>Such single word-vectors might work better than nothing, and perhaps help serve as a bootstrapping tool. But, it'd be better if you had expert-labelled examples of documents belonging to categories of interest. Then you you could also use more advanced classification algorithms, sensitive to categories that wouldn't necessarily be summarized-by (and in a tight sphere around) any single vector point. In real domains-of-interest, that'd likely do better than using single-word-vectors as category-anchors. </p> <p>For any other non-<code>Doc2Vec</code> method of vectorizing a text, you could conceivably get a comparable vector for a single word by supplying a single-word text to the method. (Even in a <code>Doc2Vec</code> mode that doesn't create word-vectors, like pure PV-DBOW, you could use that model's out-of-training-text inference capability to infer a doc-vector for a single-word doc, for known words.)</p> <p>But again, such simplified/degenerate single-word outputs might not well match the more general/textured categories you're seeking. The models are more typically used for larger contexts, and narrowing their output to a single word might reflect the peculiarities of that unnatural input case moreso than the usual import of the word in real context. </p>
2019-03-11 16:41:34.360000+00:00
2019-03-11 16:41:34.360000+00:00
null
null
55,090,042
<p>So, I have to compare the vector of an article with the vector of a single word, and I don't have any idea how to do it. It looks like BERT and Doc2Vec work well with long text, while Word2Vec works with single words. But how do I compare a long text with just a word?</p>
2019-03-10 16:48:45.123000+00:00
2019-03-11 16:41:34.360000+00:00
null
vector|nlp|word2vec|doc2vec
['https://arxiv.org/abs/1507.07998']
1