BULGARIAN ACADEMY OF SCIENCES
CYBERNETICS AND INFORMATION TECHNOLOGIES, Volume 19, No 1
Sofia, 2019. Print ISSN: 1311-9702; Online ISSN: 1314-4081
DOI: 10.2478/cait-2019-0006
Hybrid Recommender System via Personalized Users’ Context
Anthony Nosshi¹, Aziza Asem², Mohamed Badr Senousy³
¹ Information System Dept., Computers and Information Faculty, Mansoura University, Mansoura, Egypt
² Information System Dept., Computers and Information Faculty, Mansoura University, Mansoura, Egypt
³ Computer and Information System Department, Sadat Academy for Management Sciences, Cairo, Egypt
Abstract: In the movie domain, finding the appropriate movie to watch is a challenging task. This paper proposes a recommender system that suggests movies in cinemas that fit the user's available time, location, mood and emotions. Conducted experiments for evaluation showed that the proposed method outperforms the other baselines.
Keywords: Movie recommender, Emotion recommendation, Hybrid recommender system, Sentiment analysis, Spatio-temporal recommendation.
1. Introduction
Recommender systems help users overcome the information overload problem. They
can be classified into three main categories: Content-Based (CB), Collaborative
Filtering (CF), and hybrid techniques. In the CB technique, the system builds a user profile based on his/her preferences and then finds the items that best fit this profile [1]. It is helpful in discovering different users’ interests [2]. In CF, the system uses other users’ ratings of items to infer what the user would like or dislike [3]. CF can give the user recommendations that differ from what he/she has seen before [4].
Despite the benefits provided by these methods, they still suffer from many issues. For example, CF suffers from the sparsity problem, which causes the recommender system to give poor recommendations [5]. Moreover, CF also suffers from the cold start problem [6]. CB methods suffer from problems such as “over-specialization”, i.e., the system recommends only items similar to those the user has already seen [7].
Today’s websites and social media enable users to leave their feedback and
opinions to be read publicly [8]. Consequently, this enables researchers to extract the
users’ interests, mood or contextual information from the social media posts they
share. This information can be incorporated into a hybrid recommender system to produce better recommendations [9]. A hybrid recommender system, which combines both aforementioned recommendation techniques, allows incorporating additional sources of information and content. Thus, studying additional information about
the user helps in customizing the recommendation to every user according to his/her
needs [10]. Thus, information such as the user’s mood during the day, the time of
recommendation, and the location of the user can be very helpful in enhancing the
recommendation process [11].
This paper proposes a hybrid recommender system that filters the recommendation list using a three-phase filtering approach. The recommender is applied to movies currently showing in cinemas. In the first phase, the system determines the current user mood to find the most suitable recommendations. Then, in the second phase, the system filters the produced list according to the time and location of the user to recommend the nearest cinema showing a preferred movie (spatio-temporal factors). Finally, the system filters the produced list by matching the movies’ emotional factors with the emotions the user got from previously watching and liking another movie, regardless of its genre. The remainder
of this paper is organized as follows: Section 2 discusses the state of the art, Section
3 discusses the proposed recommender, Section 4 discusses the results and evaluation
and finally, Section 5 discusses the conclusion and future work.
2. State of the art
Many studies have been conducted by researchers to enhance the recommendation
results by incorporating the user’s produced contents, sentiments, locations, and other
features as follows:
Q. Y a n g [12] proposed a recommender system that integrated the domain semantics with the context information. For that purpose, the author developed an improved content-based model to incorporate them. However, this work took into account only four types of emotions. Additionally, it did not consider the emotional impression that a user may wish to have as a result of watching a movie, and it ignored the current user’s mood, location, and time availability. D i x i t and J a i n [13] proposed a recommender system that took
into account the contextual information of three different categories in order to
prepare their data selection and construction. However, this work depended only on three main categories (demographic, semantic, and social context) and ignored the emotional effect of watching movies. In addition, it missed the spatio-temporal factors and the current user mood. C o l o m b o-M e n d o z a et al. [14] proposed a system based on semantic web technology. The system considered the location, the time, and the crowd of people at the place of interest. However, this work did not take the user’s mood into consideration; it relied only on location, time, and the crowd at the place. A b b a s et al. [15] proposed a hybrid movie recommender system that took
into consideration the different users’ interests and then provided a recommendation
based on the context the users were in. Additionally, they applied their approach to
four different contexts and then compared how the system performed in each of them. C a i and G u [16] proposed a recommender system using a semi-supervised tensor factorization method. However, their work missed the current user mood, the spatio-temporal factors, and the emotional effect of the movies. K i m et al. [17]
proposed a recommender system that integrated convolutional neural network into
probabilistic matrix factorization. Their approach acquired the contextual
information from the document by applying the convolutional neural network to
enhance the accuracy of rating prediction. However, they ignored the spatio-temporal effects on the recommendation, the user mood, and the emotional factors as well. Y u,
L i n and W a n g [18] proposed a recommendation framework to alleviate the
sparsity problem. To this end, they built a contextual profile for each contextual
condition using a co-clustering algorithm. Additionally, they used the expanded
preferences in their recommendation system. However, they ignored the emotional effects of movies and did not match the user’s personal emotions with the movie emotions. Additionally, they missed the user mood. D e n g et al. [19] proposed a recommender system that took into account the user’s emotions at different granularity levels and in different time windows. In comparison to the proposed work, their work took into consideration only the emotional factors and time; it did not take the spatial factors into consideration. Z h a o et al. [20] proposed a recommender
system that analyzed the movie-poster image and the text description for movies, user
ratings, and social relationship. Then, they utilized a random-walk methodology for
presenting the recommendation. However, they ignored the spatio-temporal factors.
In summary, the aforementioned studies missed some or all of the following features in their recommendation process: the current user mood, the location and time of the recommendation, and the emotional factors in the time
domain.
3. The proposed recommender
The proposed algorithm consists of three main phases: the first phase is responsible for determining the user’s mood, the second for determining the spatio-temporal factors, and the final phase for the emotional constraints, as illustrated in Fig. 1.
Fig. 1. The proposed recommender overview
(Flowchart: Start → Mood Determination (Lm) → Spatio-Temporal Constraints (Lk) → Emotional Constraints → End.)
3.1. The first phase: Mood determination
The first step in the proposed recommender is to extract the user’s mood. In a nutshell, this step consists of two subsystems, as illustrated in Fig. 2: the Data Preparation subsystem and the Mood Fitting subsystem.
3.1.1. The first subsystem: Data preparation
The goal of this step is to extract the users’ emotions from both the general social media posts and the movie-related posts that the users share publicly in a microblog (Fig. 2). Twitter was used for that purpose to extract the users’ tweets. Then, associations between users, emotions, and movies were constructed and represented as a tuple of three elements (User, Emotion, Movie), as detailed in the following paragraphs.
Fig. 2. Mood determination subsystems
The idea behind the approach applied in this step is that there are associations between the users’ posts on social media and the movies they are watching [19]. The emotions expressed on social media, just before the moment at which the user announced watching a movie, can reflect to a great extent the mood the user was in when deciding to watch that specific movie. Applying the same concept presented in [19], social media posts can be classified into two types: movie-related posts and general posts. General posts are those in which users express whatever they want, such as news, information, personal opinions, etc. Movie-related posts are those which users share to show that they are watching a movie in their favorite cinema. Table 1 shows the difference between the two types of social media posts: the first and the second posts belong to the general type, while the third is a movie-related post.
Table 1. Microblog example
USER ID      Post content                                               Time                  Movie
876543467    Would like to thank all my friends for the birthday        2016.06.15 13:25:39
             wishes &…
876543467    Feeling Excited with my friends                            2016.06.15 13:29:12
876543467    Watching “Victoria”. Best birthday ever with my lovely     2016.06.15 13:55:02   Victoria
             friends @Village East Cinema, 181-189 2nd Ave,
             New York, NY 10003
From the above tweets it can be inferred that, before going to the cinema, the user was in a happy mood: he/she was celebrating a birthday with friends by watching a movie called “Victoria”. Then, by applying the aforementioned hypothesis, it can be inferred that the decision to watch this movie with friends followed from the correlation between the user’s happy mood and the movie that was watched; thus, it was an emotionally dependent decision. As illustrated in Fig. 2, in
the first step in data preparation, Twitter posts (tweets) were extracted for people
located in the United States using Twitter Enterprise Data
(https://developer.twitter.com/en/enterprise). This extraction retrieved
data for 60,345,567 users, with an average of 720 tweets per user. Then, users who posted fewer than five movie or cinema tweets were filtered out; likewise, movies or cinemas that were mentioned in fewer than five tweets were removed. The resulting dataset consists of a total of 12,927,849 tweets. Then SAS Viya (https://www.sas.com/en_us/software/viya.html) was used to extract the sentiments contained in those tweets, as it has its own sentiment dictionary. Afterwards, all tweet posts were
segmented into words that were used to construct the Bag Of Words (BOW) model [21]. Then, based on that model, the number of emotional words was counted for each emotional class [22] (granularities of 2D, 7D, and 21D). According to the number of emotional words, the emotional vector of the text was determined; therefore, each microblog post can be represented by emotional vectors. For example, the first microblog in Table 1 has three emotion vectors: a 2D-emotion vector, a 7D-emotion vector, and a 21D-emotion vector, whose values are (3, 0), (3, 0, 0, 0, 0, 0, 0), and (3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), respectively. Then, to discover the emotions and moods related to the movie posts, only the general social media posts published before the movie post were taken into consideration. For that purpose, the user’s latest social media posts were retrieved within the specified time window (the last hour in this case). Then, the sum of the emotional vectors of all qualifying social media posts was calculated. After that, associations were formed between the emotions, movies, and all users, and each association was represented by a three-element tuple (User, Mood, Movie), for example (876543467, ((4, 0), (3, 1, 0, 0, 0, 0, 0), (3, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)), Victoria).
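To make the data preparation step concrete, the following minimal Python sketch shows how an emotional vector could be counted from a bag of words and how the (User, Mood, Movie) association for the example of Table 1 could be formed. The emotion lexicon and its classes are illustrative placeholders standing in for the SAS Viya dictionary and the emotional classes of [22]; they are not the actual resources used in the paper.

from collections import Counter

# Hypothetical 7-class emotion lexicon; a stand-in for the SAS Viya dictionary
# and the emotional classes of [22], shown with only a few words per class.
EMOTION_LEXICON_7D = {
    "joy": {"excited", "happy", "lovely", "best"},
    "sadness": {"sad", "lonely", "miss"},
    "anger": {"angry", "furious", "hate"},
    "fear": {"afraid", "scared", "worried"},
    "surprise": {"surprised", "unexpected", "wow"},
    "disgust": {"disgusting", "gross"},
    "trust": {"thank", "friends", "reliable"},
}
CLASSES_7D = list(EMOTION_LEXICON_7D)

def emotion_vector_7d(post_text):
    """Count the emotional words of each class in the bag of words of one post."""
    bow = Counter(post_text.lower().split())
    return [sum(bow[w] for w in EMOTION_LEXICON_7D[c]) for c in CLASSES_7D]

def mood_before_movie(general_posts):
    """Sum the emotion vectors of the general posts inside the time window
    (one hour in the paper) that precede the movie-related post."""
    vectors = [emotion_vector_7d(p) for p in general_posts] or [[0] * len(CLASSES_7D)]
    return [sum(col) for col in zip(*vectors)]

# (User, Mood, Movie) association for the example user of Table 1.
association = ("876543467",
               mood_before_movie(["Would like to thank all my friends for the birthday wishes",
                                  "Feeling Excited with my friends"]),
               "Victoria")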
3.1.2. The second subsystem: Mood Fitting
This subsystem (Fig. 2, Mood Fitting) consists of three steps, the first step is for
extracting the current user’s associations, the second is for calculating the similarity,
and the third ‒ for finding out the current user’s interest.
Step 1. The current user’s associations: All the target user’s posts (both
general and movie related posts) were extracted. Then, the user’s current mood and
the historical associations were also calculated by the same method discussed in the
previous subsystem.
Step 2. Calculating the similarity: After that, the similarity between the current user and the other users was computed based on the associations they have, by incorporating them into collaborative filtering as presented in [19]. Any two users are considered similar if they have watched the same movie before; the similarity degree increases with the number of movies the two users have in common. Additionally, if two users watched the same movie and also shared the same mood, they are more similar than two users who watched the same movie under different moods. For example, if three users A, B and C watched the same movie, and A and C watched it in a happy mood while B watched it in an angry mood, then A and C are more similar to each other than either is to B. Then, the users were sorted according to their similarity to the target user, and the top-k user list was obtained.
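A minimal sketch of this mood-aware similarity is given below, assuming each user’s history is a set of (movie, mood label) pairs; the exact weighting scheme of [19] is not reproduced here, so the extra credit given to same-mood co-watches is purely illustrative.

def similarity(history_a, history_b):
    """Co-watched movies count once; co-watches under the same mood count again."""
    movies_a = {movie for movie, _ in history_a}
    movies_b = {movie for movie, _ in history_b}
    common_movies = movies_a & movies_b      # same movie watched by both users
    same_mood = history_a & history_b        # same movie watched under the same mood
    return len(common_movies) + len(same_mood)

def top_k_similar(target_history, other_histories, k=20):
    """Rank the other users by their similarity to the target user and keep the top k."""
    ranked = sorted(other_histories,
                    key=lambda user: similarity(target_history, other_histories[user]),
                    reverse=True)
    return ranked[:k]

# Example: A and C watched "Heat" in a happy mood, B watched it in an angry mood.
histories = {"A": {("Heat", "joy")}, "B": {("Heat", "anger")}, "C": {("Heat", "joy")}}
print(top_k_similar(histories["A"], {"B": histories["B"], "C": histories["C"]}, k=2))  # ['C', 'B']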
Step 3. The current user’s interest: Afterwards, the interest of the current user in all movies of the top-k users was calculated as presented in [19]: when the top-k similar users have watched a movie in the same mood as the target user’s current mood, it is more likely that the target user will have a higher interest in that movie. Then, all movies were sorted according to the target user’s interest in them and added to a list (denoted here as Lm), which is the input of the next phase for the spatio-temporal constraints.
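The interest scoring of Step 3 could then be sketched as follows; the additive bonus for a matching mood is an assumed stand-in for the exact scoring of [19].

def interest_scores(current_mood, top_k_users, histories, mood_bonus=1.0):
    """Score every movie watched by the top-k neighbours; a neighbour who watched
    it in the target user's current mood adds an extra (assumed) bonus."""
    scores = {}
    for user in top_k_users:
        for movie, mood in histories[user]:
            scores[movie] = scores.get(movie, 0.0) + 1.0
            if mood == current_mood:
                scores[movie] += mood_bonus
    # Lm: all candidate movies sorted by the target user's estimated interest.
    return sorted(scores, key=scores.get, reverse=True)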
3.2. The second phase: Spatio-temporal constraints
This phase receives the list (Lm) produced by the first phase (Mood Determination) and then applies further processing to produce a filtered movie list that fits the spatio-temporal constraints of the target user. It consists of three main components: the User Interface, the User Profile Engine, and the Spatio-Temporal Engine (composed of a Time Engine and a Location Engine), which is responsible for determining the spatio-temporal factors. They work as follows.
User Interface. It is responsible for interacting with both the user and the recommendation engine. It is built with Telegram Messenger as a Telegram Bot (https://core.telegram.org/bots). It is also responsible for determining the time and the location of the user using the Google Location API (https://developers.google.com/location-context/fused-location-provider/) (to retrieve the geographical coordinates) and the Geocoding API (https://developers.google.com/maps/documentation/geocoding/start) (for reverse geocoding to obtain a human-readable address). Then, it sends them to the User Profile Engine to be used in the recommendation process. The time can be either the current time at which the user requested the recommendation, or a time that the user specified according to his/her wish.
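As an illustration of this component, the sketch below reverse-geocodes the coordinates carried by a Telegram location message (its "location" field holds "latitude" and "longitude") using the Google Geocoding API. GOOGLE_API_KEY and the preferred_time argument are placeholders, and the profile layout is an assumption for illustration.

import requests

GOOGLE_API_KEY = "..."   # placeholder; a real key is required

def reverse_geocode(lat, lng):
    """Turn coordinates into a human-readable address via the Google Geocoding API."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"latlng": f"{lat},{lng}", "key": GOOGLE_API_KEY},
        timeout=10,
    )
    results = resp.json().get("results", [])
    return results[0]["formatted_address"] if results else ""

def profile_from_location_message(message, preferred_time):
    """Assemble the part of the user profile that the User Interface contributes."""
    loc = message["location"]                 # Telegram location messages carry lat/long
    return {
        "lat": loc["latitude"],
        "lng": loc["longitude"],
        "address": reverse_geocode(loc["latitude"], loc["longitude"]),
        "time": preferred_time,               # current time or a time chosen by the user
    }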
The User Profile Engine. Its job is to build a personal profile for the target
user in the system. The profile contains the target user’s mood extracted from the first
phase, and the retrieved movie list (Lm list), in addition to the time and location
retrieved from the user interface.
Time Engine. This engine is responsible for filtering the movie list (Lm) to obtain the names of the movies that are running in cinemas at the user’s preferred time. For that purpose, it first retrieves the user profile, which contains the user’s preferred time. Then, it sends HTTP requests to a movie API, the International Showtimes API (https://api.internationalshowtimes.com), to retrieve a list of movies that are in cinemas within the user’s specified time range (date and hour). It also retrieves a list of the cinemas showing those movies, with details such as the address, street, and city. Afterwards, it filters the (Lm) list to keep only the movies and cinemas within the specified time range, i.e., it discards all the movies that are not in cinemas at the time specified by the user. After filtering the (Lm) list, it creates a new filtered list of movies at a suitable time (denoted here as Lt).
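A hedged sketch of the Time Engine step follows. The International Showtimes API requires an API key; the endpoint path, query parameters, and response fields used here are assumptions made for illustration rather than the documented contract of that API.

import requests

SHOWTIMES_API_KEY = "..."   # placeholder

def showtimes_in_range(lat, lng, time_from, time_to):
    """Ask the showtimes service which movies run in nearby cinemas in the range.
    The path, parameters and fields below are assumed, not taken from the API docs."""
    resp = requests.get(
        "https://api.internationalshowtimes.com/v4/showtimes",
        headers={"X-API-Key": SHOWTIMES_API_KEY},
        params={"location": f"{lat},{lng}",
                "time_from": time_from, "time_to": time_to},
        timeout=10,
    )
    return resp.json().get("showtimes", [])

def filter_by_time(lm, showtimes):
    """Build Lt: keep only the Lm movies that actually have a showing in the range."""
    running = {}
    for s in showtimes:                       # assumed fields: movie_title, cinema_address
        running.setdefault(s["movie_title"], s["cinema_address"])
    return [{"movie": m, "cinema_address": running[m]} for m in lm if m in running]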
Location Engine. It takes the previously created list (Lt), which contains the movies within the user’s preferred time. Then, it uses the Google Distance Matrix API (https://developers.google.com/maps/documentation/distance-matrix/start) to retrieve the distance and the time needed to reach the cinemas showing those movies. In other words, it calculates the distance and the travel time for the user to move from the current location (the origin) to the destination cinemas, taking into account the user’s mobility mode (for example driving, walking, cycling, etc.). Finally, it creates a new filtered list, denoted here as (Lk). This list now contains the movies that match the user’s mood, within his/her preferred time, and at cinemas near his/her location. This list of movies (Lk) is the input of the third and final step of the recommendation process, the EFBM step.
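The Location Engine step can be sketched with the Google Distance Matrix API as below, consuming the Lt entries produced in the previous sketch; the 45-minute travel budget and the reuse of the placeholder GOOGLE_API_KEY are assumptions, not parameters from the paper.

import requests

GOOGLE_API_KEY = "..."   # placeholder, as in the User Interface sketch

def travel_seconds(origin_lat, origin_lng, cinema_addresses, mode="driving"):
    """Query the Google Distance Matrix API for the travel time from the user's
    location to each candidate cinema; mode may be driving, walking, bicycling or transit."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/distancematrix/json",
        params={"origins": f"{origin_lat},{origin_lng}",
                "destinations": "|".join(cinema_addresses),
                "mode": mode,
                "key": GOOGLE_API_KEY},
        timeout=10,
    )
    elements = resp.json()["rows"][0]["elements"]
    return {addr: el["duration"]["value"]            # seconds of travel time
            for addr, el in zip(cinema_addresses, elements)
            if el.get("status") == "OK"}

def filter_by_distance(lt, durations, max_minutes=45):
    """Build Lk: keep the movies whose cinema is reachable within the assumed budget."""
    return [entry for entry in lt
            if durations.get(entry["cinema_address"], float("inf")) <= max_minutes * 60]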
3.3. Exploring the emotional factors with emotional fingerprint model
Now, the prepared movie list (Lk) is passed to the third phase of the proposed recommender in order to filter the movies and find which of them can give the user the desired emotions. For that purpose, the Emotional Fingerprints Based Model (EFBM) proposed in [23] was applied. The EFBM model groups movies by the emotional patterns of some key factors that change across time, extracted from the movies’ reviews. Those factors are what people discuss in reviews, showing people’s most popular everyday problems, ideas, objects, and philosophical concepts. These patterns over time form a kind of fingerprint that enables the user to choose a movie that can give him/her the same emotional experience he/she got before from watching and liking another movie in the past, regardless of its genre. To apply the EFBM model, the IMDB (https://www.imdb.com/) users’ reviews were first extracted for all movies included in the (Lk) list. Then, these reviews were analyzed to discover the movies’ features and to group the movies according to those features. Afterwards, the algorithm shown in Fig. 3 was applied.
Fig. 3. Algorithm used in the EFBM model [23]
In a nutshell, in the first step of the algorithm, two small models were created for data dimension reduction and then aggregated later. After that, the genres were combined with a dimension-expanding technique. The complementary weight was used to modify the Jaccard distance, in order to calculate the value of the information in a genre field. It was calculated as follows [23]:
(1)   w^wgh_ij = (–log w_i) × (–log w_j),
where w^wgh_ij is the complementary weight; w_j is the complementary weight of the movie by genre [23], and
(2)   w_i = n_i / N,
where n_i is the number of movies carrying genre i and N is the movie count by all genres. Then, topics were extracted using SAS
Enterprise Miner (https://www.sas.com/en_us/home.html) to discover what every
cluster is about. After that, the features taxonomy was created manually. Then, the
sentiment analysis was applied to each cluster for visualization purposes.
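A small numerical sketch of equations (1)-(2), under the reconstruction given above, is shown below: w_i is read as the share of movies carrying genre i, and the pairwise complementary weight is the product of the two –log values. How this weight is folded into the Jaccard distance is not spelled out in the text, so that step is omitted here.

import math

def genre_shares(movie_genres):
    """Equation (2), as reconstructed: w_i = n_i / N over all N movies."""
    n = len(movie_genres)
    counts = {}
    for genres in movie_genres.values():
        for g in genres:
            counts[g] = counts.get(g, 0) + 1
    return {g: c / n for g, c in counts.items()}

def complementary_weight(w_i, w_j):
    """Equation (1): rarer genres (small w) yield a larger information weight."""
    return (-math.log(w_i)) * (-math.log(w_j))

shares = genre_shares({"Victoria": {"Drama", "Romance"},
                       "Skyfall": {"Action", "Thriller"},
                       "Heat": {"Action", "Crime", "Drama"}})
print(complementary_weight(shares["Romance"], shares["Crime"]))   # ≈ 1.21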
Fig. 4. Statistics of sample clusters after applying sentiment analysis
Fig. 4 shows the statistics of sample clusters after sentiment analysis was applied
(A for cluster_id=100, B for cluster_id=143).
Then, the SAX approach (http://www.cs.ucr.edu/~eamonn/SAX.htm) was applied to discover the pattern similarity. SAX was configured with a window of 25 days, an intersection of 20%, and an alphabet size of five. Then, by applying the distance [23]
(3)   min([vec_cos(i, j) for i, j in b]),
a distance matrix was built for all features over all polarities. Finally, the neutral entries were dropped and the polarities were combined pair by pair by taking the maximum (max(a, b)) for every cell of the matrix. Examples of the resulting grouping are given in Fig. 5: Fig. 5A for spy stories, Fig. 5B for war, and Fig. 5C for criminal movies.
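A sketch of the distance computation in (3) and of the polarity combination described above is given below; the SAX symbolisation itself (25-day window, 20% intersection, five-letter alphabet) is omitted, and the functions operate on plain numeric series purely for illustration.

import math

def vec_cos(i, j):
    """Cosine distance between two feature-polarity time series."""
    dot = sum(a * b for a, b in zip(i, j))
    norm = math.sqrt(sum(a * a for a in i)) * math.sqrt(sum(b * b for b in j))
    return 1.0 - dot / norm if norm else 1.0

def series_distance(pairs):
    """Equation (3): keep the smallest cosine distance over the compared pairs."""
    return min(vec_cos(i, j) for i, j in pairs)

def combine_polarities(positive, negative):
    """Drop the neutral class and keep, cell by cell, the stronger polarity (max(a, b))."""
    return [[max(a, b) for a, b in zip(row_p, row_n)]
            for row_p, row_n in zip(positive, negative)]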
Fig. 5. Example of movies grouping based on the emotional fingerprints
4. Results and discussion
To evaluate the proposed approach, the IMDB dataset used in phase three was split into a training set and a test set using 10-fold cross-validation [24]. Then, the following standard metrics were applied: Precision, Recall, and F-Measure.
In more detail, according to [25], Precision measures the ability of the recommender system to return only relevant items per request. It can be calculated as follows:
(4)   Precision = |Relevant ∩ Recommended| / |Recommended|.
According to [26], Recall measures the ability of the system to retrieve as many of the relevant items as possible. It can be calculated as follows:
(5)   Recall = |Relevant ∩ Recommended| / |Relevant|.
According to [27], the F-Measure is the harmonic mean of Precision and Recall. It can be calculated as follows:
(6)   F1 = 2 × Precision × Recall / (Precision + Recall).
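The three metrics can be computed per recommendation list as in the following sketch, where the relevant set comes from the held-out fold of the 10-fold split; the set-based formulation mirrors (4)-(6).

def precision_recall_f1(recommended, relevant):
    """Precision, recall and F1 for one recommendation list against the held-out items."""
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(precision_recall_f1(["Victoria", "Heat", "Skyfall"], ["Victoria", "Heat"]))  # (0.67, 1.0, 0.8)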
Additionally, the experiment was executed for four different user moods, extracted from the social media in the first phase, to understand how the recommender performs with different users’ moods. Applying formulas (4), (5), and (6) for the four moods gives the results in Table 2.
Table 2. Evaluation metrics for the four moods
Metric/Mood       Anger   Joy    Sadness   Surprise
Precision         0.89    0.99   0.84      0.97
Recall            0.85    0.96   0.77      0.91
F-Measure (F1)    0.87    0.97   0.80      0.94
Table 2 shows the results of the three metrics for the four different moods. The results show that the proposed model’s results varied according to the mood the user had. The model achieved the highest precision with the moods in the following order: Joy, Surprise, Anger, Sadness. Fig. 6 also depicts the metric values for the four moods.
Fig. 6. Average proposed system metrics
As can be inferred from the figure above, the model achieved different results according to the users’ moods: it achieved the best precision results for the Joy mood, then the Surprise mood, the Anger mood, and lastly the Sadness mood.
To evaluate the performance of the proposed recommender, it was compared against three baselines. The first baseline compares the results of the proposed work with the results of the collaborative filtering technique alone. The second baseline compares the proposed work with the work in [23], which depends on the EFBM only and ignores the mood and location constraints. Finally, the third baseline compares the proposed work with the work of [15]. In the latter work, the experiment was applied with N=20; therefore, in order to compare the proposed work with it, all the metrics were applied at N=20 for all the baselines as well. After calculating the aforementioned metrics for the proposed work at N=20, the average of those metrics was calculated to allow comparison with the other baselines. Table 3 shows the results at N=20 for the four different moods before averaging.
Table 3. Evaluation metrics for the four moods at N=20
Metric/Mood       Anger   Joy    Sadness   Surprise
Precision         0.91    0.97   0.80      0.94
Recall            0.86    0.91   0.78      0.89
F-Measure (F1)    0.88    0.94   0.79      0.91
Table 3 shows that the results of the three metrics applied at N=20 also vary according to the user’s mood. The model again had the highest precision for
the Joy mood, then the Surprise, the Anger, and finally the Sadness mood. Fig. 7 also depicts the metric values for the four moods.
Fig. 7. Evaluation metrics for the four moods at N=20
In addition, in [15] only the precision metric was used for evaluation; therefore, only this metric was used in the comparison with the other baselines. They also applied the precision metric in four different contexts. Therefore, their average precision was calculated, and the average precision over the four moods at N=20 was calculated for the proposed work. Table 4 shows the results of all the baselines at N=20.
Table 4. Comparison with Baselines at N=20
Method                  Precision
Proposed Recommender    0.92
First Baseline          0.63
Second Baseline         0.87
Third Baseline          0.85
In Table 4, the precision value of the proposed recommender was 0.92, while for the other three baselines it was as follows: the first baseline (traditional CF) = 0.63, the second baseline (EFBM) = 0.87, and the third baseline (the work of [15]) = 0.85. This shows that the proposed recommender outperformed the other baselines on the precision metric. Fig. 8 also shows a graphical representation of the results.
Fig. 8. Precision metric comparison with baselines
Fig. 8 depicts the comparison of the proposed recommender with the other baselines and reflects the superiority of the proposed system over them.
The experimental results show that using a hybrid recommendation system that takes into consideration the users’ moods, location, available time, and the emotional features over time can improve the performance of the recommendation process, which leads to better results and better user satisfaction. The comparison with the other baselines shows that the proposed work outperformed them. This is due to a better understanding of the user’s mood, availability, and emotions, which in turn is reflected in the recommendation results and the user’s satisfaction. The average precision of the proposed recommender was 0.92, while the baselines achieved lower results: first baseline = 0.63, second baseline = 0.87, and third baseline = 0.85. For a deeper understanding of the model, the proposed work was also tested experimentally with four different user moods (Joy, Sadness, Surprise, and Anger). The experimental results showed that the model works best with the moods in the following order: Joy, Surprise, Anger, and Sadness. The results also, to some extent, reflect the nature of human feelings: people share and post about what they feel in their everyday life especially when they are happy, as illustrated in the example of Table 1, whereas when people feel sad, they are usually less expressive about their feelings on social media. This can be due to psychological reasons, or simply because they are not in the mood to use social media, which some people still consider a means of entertainment.
5. Conclusion and future work
The main objective of this paper is to create a hybrid recommender system that takes into consideration many factors, such as the user’s mood, location, availability, and emotions. The proposed recommender consists of three phases, each of which produces a list of movies that is filtered by additional criteria in the next phase. The first phase determines the user’s mood, the next phase applies the spatio-temporal factors, and in the last phase the desired emotional experience is determined by applying the emotional fingerprint-based model.
Experiments were conducted to understand the recommendation quality and behavior. The experimental results were evaluated using three evaluation metrics: precision, recall, and F-Measure. In order to understand the quality of the recommendation, the results were compared with three baselines. The first baseline compares the results of the proposed model with the results of applying collaborative filtering only. The second baseline compares the proposed system with the EFBM-only recommender presented in [23]. The third baseline compares the experiment with another hybrid recommender system [15]. The comparison showed that the proposed work outperformed all baselines, which reflects the importance of incorporating the user’s mood, spatio-temporal aspects, and emotional fingerprint into the recommendation system.
For future work, more user context should be taken into consideration, such as the weather, seasonality, the user’s age, gender, job, educational level and more personal details such as marital status and even parental status. Additionally, the model should explore what happens if a user was impressed by a specific movie or feature. In other words, is it possible that the user dislikes a movie while still loving some of its features, which in the end led him/her to watch it?
Further, adding the emotional factors opens many questions, such as: is it still possible that the user changes his/her mind and watches a movie different from the one suggested by the recommender as a result of advertising? Is a feature preferred by a user enough to make him/her watch that movie? Is there any effect of mass “euphoria” around a specific topic that leads some people to watch a specific movie or switch their preferred movie to another one? For example, if some people prefer the movie “Only Lovers Left Alive”, which has a strong “life and death” topic, would they look for some other “life and death” movie, such as “Bicentennial Man”? Or would they just go by tags such as “movies by Jim Jarmusch” or “movies with vampires”, as can be seen in Fig. 9?
Fig. 9. IMDB proposal for some movies
R e f e r e n c e s
1. C h a m p i r i, Z. D., S. R. S h a h a m i r i, S. S. B. S a l i m. A Systematic Review of Scholar
Context-Aware Recommender Systems. – Expert Syst. Appl., Vol. 42, 2015, No 3,
pp. 1743-1758.
2. Shu, J., X. S h e n, H. Liu, B. Y i, Z. Z h a n g. A Content-Based Recommendation Algorithm for
Learning Resources. – Multimed. Syst., Vol. 24, 2018, No 2, pp. 163-173.
3. F u, M., H. Q u, D. M o g e s, L. L u. Attention Based Collaborative Filtering. – Neurocomputing,
Vol. 311, 2018, pp. 88-98.
4. W a n g, F., S. Lin, X. Luo, H. W u, R. W a n g, F. Z h o u. A Data-Driven Approach for Sketch-
Based 3D Shape Retrieval via Similar Drawing-Style Recommendation. – Comput. Graph.
Forum, Vol. 36, October 2017, No 7, pp. 157-166.
5. C h u, P. M., S. J. L e e. A Novel Recommender System for e-Commerce. – In: Proc. of 10th
International Congress on Image and Signal Processing, BioMedical Engineering and
Informatics (CISP-BMEI’17), 2017, pp. 1-5.
6. Z h a o, Q., C. W a n g, P. W a n g, M. Z h o u, C. J i a n g. A Novel Method on Information
Recommendation via Hybrid Similarity. – IEEE Trans. Syst. Man, Cybern. Syst., Vol. 48,
2018, No 3, pp. 448-459.
7. H a r i a d i, I., D. N u r j a n a h. Hybrid Attribute and Personality Based Recommender System for
Book Recommendation. – In: Proc. of International Conference on Data and Software
Engineering (ICoDSE’17), 2017, pp. 1-5.
8. Y a n g, S.-B., S.-H. S h i n, Y. J o u n, C. Koo. Exploring the Comparative Importance of Online
Hotel Reviews’ Heuristic Attributes in Review Helpfulness: A Conjoint Analysis Approach. –
J. Travel Tour. Mark., Vol. 34, September 2017, No 7, pp. 963-985.
9. W a n g, H., K. Guo. The Impact of Online Reviews on Exhibitor Behaviour: Evidence from Movie
Industry. – Enterp. Inf. Syst., Vol. 11, November 2017, No 10, pp. 1518-1534.
10. C l a r i z i a, F., F. C o l a c e, M. L o m b a r d i, F. P a s c a l e. A Context Aware Recommender
System for Digital Storytelling. – In: Proc. of IEEE 32nd International Conference on
Advanced Information Networking and Applications (AINA’18), 2018, pp. 542-549.
11. B o f f a, S., C. D. M a i o, B. G e r l a, M. P a r e n t e. Context-Aware Advertisement
Recommendation on Twitter through Rough Sets. – In: Proc. of IEEE International Conference
on Fuzzy Systems (FUZZ-IEEE’18), 2018, pp. 1-8.
12. Y a n g, Q. A Novel Recommendation System Based on Semantics and Context Awareness.
– Computing, Vol. 100, 2018, No 8, pp. 809-823.
13. D i x i t, V. S., P. J a i n. A Proposed Framework for Recommendations Aggregation in Context
Aware Recommender Systems. – In: Proc. of 8th International Conference on Cloud
Computing, Data Science & Engineering (Confluence), 2018, pp. 209-214.
14. C o l o m b o-M e n d o z a, L. O., R. V a l e n c i a-G a r c í a, A. R o d r í g u e z-G o n z á l e z,
G. A l o r-H e r n á n d e z, J. J. S a m p e r-Z a p a t e r. RecomMetz: A Context-Aware
Knowledge-Based Mobile Recommender System for Movie Showtimes. – Expert Syst. Appl.,
Vol. 42, 2015, No 3, pp. 1202-1222.
15. A b b a s, M., M. U. R i a z, A. R a u f, M. T. K h a n, S. K h a l i d. Context-Aware Youtube
Recommender System. – In: Proc. of International Conference on Information and
Communication Technologies (ICICT’17), 2017, pp. 161-164.
16. C a i, G., W. G u. Heterogeneous Context-Aware Recommendation Algorithm with Semi-Supervised Tensor Factorization. – In: Intelligent Data Engineering and Automated Learning
– IDEAL’17, 2017, pp. 232-241.
17. Kim, D., C. P a r k, J. O h, S. Lee, H. Y u. Convolutional Matrix Factorization for Document
Context-Aware Recommendation. – In: Proc. of 10th ACM Conf. Recomm. Syst. (RecSys’16),
2016, pp. 233-240.
18. Y u, P., L. Lin, J. W a n g. A Novel Framework to Alleviate the Sparsity Problem in Context-Aware
Recommender Systems. – New Rev. Hypermedia Multimed., Vol. 23, April 2017, No 2,
pp. 141-158.
19. D e n g, S., D. W a n g, X. L i, G. X u. Exploring User Emotion in Microblogs for Music
Recommendation. – Expert Syst. Appl., Vol. 42, 2015, No 23, pp. 9284-9293.
20. Z h a o, Z., et al. Social-Aware Movie Recommendation via Multimodal Network Learning. – IEEE
Trans. Multimed., Vol. 20, 2018, No 2, pp. 430-440.
21. Z h a n g, Y., R. Jin, Z.-H. Z h o u. Understanding Bag-of-Words Model: A Statistical Framework.
– Int. J. Mach. Learn. Cybern., Vol. 1, 2010, No 1, pp. 43-52.
22. E k m a n, P., R. J. D a v i d s o n, Eds. The Nature of Emotion: Fundamental Questions. New York, NY,
US, Oxford University Press, 1994.
23. N o s s h i, A., A. A s e m, M. B. S e n o u s y. Hybrid Recommender System Using Emotional
Fingerprints Model. – Int. J. Inf. Retr. Res., Vol. 9, 2019, No 3, p. Article 4 (in Press).
24. K o h a v i, R. A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model
Selection. – In: Proc. of 14th International Joint Conference on Artificial Intelligence, Vol. 2,
1995, pp. 1137-1143.
25. Y u a n, T., H. W u, J. Zhu, L. S h e n, G. Q i a n. MS-UCF: A Reliable Recommendation Method
Based on Mood-Sensitivity Identification and User Credit. – In: Proc. of International
Conference on Information Management and Processing (ICIMP’18), 2018, pp. 16-20.
26. W a n g, H.-C., H.-T. J h o u, Y.-S. T s a i. Adapting Topic Map and Social Influence to the
Personalized Hybrid Recommender System. – Inf. Sci., NY, 2018.
27. L i m, H., H.-J. K i m. Item Recommendation Using Tag Emotion in Social Cataloging Services. – Expert Syst. Appl., Vol. 89, 2017, pp. 179-187.
Received: 29.01.2019; Accepted: 14.02.2019 (fast track)