Department of Applied Statistics, College of Business and Economics

Notices

Title: [Notice] Statistics seminars hosted by the Department of Statistics and the Research Center for Data Science will be held on April 16, 2012.
Author: 이종은   Date: 2012.04.09   Views: 6666

Department of Statistics Seminar Announcement


▣ Seminar 1

√ Date/Time: Monday, April 16, 2012, 4:00–4:50 p.m.
√ Speaker: Prof. 김명민 (Dept. of Biostatistics, University at Buffalo)
√ Venue: Information Center Room 1, Basement Level 1, Law Building, Chung-Ang University
√ Topic: A Progressive Block Empirical Likelihood Method for Time Series


▣ Seminar 2

√ Date/Time: Monday, April 16, 2012, 5:00–5:50 p.m.
√ Speaker: Prof. 최호식 (Dept. of Information Statistics, Hoseo University)
√ Venue: Information Center Room 1, Basement Level 1, Law Building, Chung-Ang University
√ Topic: Some computational algorithms in sparse supervised learning



Abstracts


[Seminar 1]

This paper develops a new blockwise empirical likelihood (BEL) method for stationary, weakly dependent time processes, called the progressive block empirical likelihood (PBEL). In contrast to the standard version of BEL, which uses data blocks of constant length for a given sample size and whose performance can depend crucially on the block length selection, this new approach involves a data-blocking scheme in which blocks increase in length by an arithmetic progression. Consequently, no block length selection is required for the PBEL method, which implies a certain type of robustness for this version of BEL. For inference of smooth functions of the process mean, theoretical results establish the chi-square limit of the log-likelihood ratio based on PBEL, which can be used to calibrate confidence regions. Simulation evidence indicates that the method can perform comparably to the standard BEL in coverage accuracy (when the latter uses a "good" block choice) and can exhibit more stability, all without the need to select a block length.

Keywords: Arithmetic progression; Block bootstrap; Stationarity; Weak dependence
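
The defining ingredient of PBEL is the blocking scheme itself. The sketch below (numpy only) illustrates one plausible reading of it: consecutive, non-overlapping blocks whose lengths grow by an arithmetic progression, with the block means then fed into a standard Owen-style empirical likelihood computation for the mean. The starting length, common difference, handling of the leftover tail, the helper names progressive_blocks and log_el_ratio, and the omitted chi-square calibration constant are all illustrative assumptions, not the paper's exact construction.

import numpy as np

def progressive_blocks(x, d=1):
    """Split the series into consecutive, non-overlapping blocks with
    lengths 1, 1 + d, 1 + 2d, ... (an arithmetic progression); any short
    leftover tail is dropped. An assumed scheme, not the paper's exact one."""
    blocks, start, length = [], 0, 1
    while start + length <= len(x):
        blocks.append(x[start:start + length])
        start += length
        length += d
    return blocks

def log_el_ratio(block_means, mu, iters=50):
    """Owen-style log empirical likelihood ratio for a candidate mean,
    profiling the Lagrange multiplier by Newton's method. PBEL's actual
    chi-square calibration involves extra scaling not shown here."""
    z = np.asarray(block_means) - mu
    lam = 0.0
    for _ in range(iters):
        denom = 1.0 + lam * z             # stays positive for mu near the mean
        score = np.sum(z / denom)         # solve sum z_b/(1 + lam z_b) = 0
        deriv = -np.sum(z**2 / denom**2)
        lam -= score / deriv              # Newton step
    return -np.sum(np.log(1.0 + lam * z))

# Toy check on a weakly dependent AR(1) series.
rng = np.random.default_rng(0)
e = rng.standard_normal(600)
x = np.empty_like(e)
x[0] = e[0]
for t in range(1, len(e)):
    x[t] = 0.5 * x[t - 1] + e[t]

means = np.array([b.mean() for b in progressive_blocks(x)])
print("blocks:", len(means), " log-EL ratio at the sample mean:",
      log_el_ratio(means, x.mean()))

Note that no tuning parameter enters progressive_blocks beyond the assumed common difference; this is the sense in which the abstract says no block length selection is required.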

[Seminar 2]

Variable selection is a fundamental task in high-dimensional statistical supervised learning. Traditional approaches follow stepwise and subset selection procedures, which are computationally intensive and unstable, and whose sampling properties are difficult to derive. Alternative variable selection methods are sparse penalized approaches, including bridge regression (Frank and Friedman, 1993), the least absolute shrinkage and selection operator (LASSO; Tibshirani, 1996), the smoothly clipped absolute deviation (SCAD) penalty (Fan and Li, 2001), and the minimax concave penalty (Zhang, 2010). In high-dimensional learning via penalized approaches, regularization demands algorithms of low computational complexity, so practical implementations must be efficient. For this purpose, algorithms that follow the entire solution path are fast and stable enough for analyzing high-dimensional data. In this talk, algorithms for sparse supervised learning problems including regression, classification, quantile regression, and inverse covariance estimation are considered. First, I will briefly present the least angle regression (LARS) algorithm (Efron et al., 2004) and then follow-up algorithms for various supervised learning problems. I will also show some recent work extending these ideas to non-convex optimization problems.
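
As a concrete instance of the sparse penalized approaches listed above, the following numpy-only sketch fits the LASSO by cyclic coordinate descent with the soft-thresholding operator, then traces an approximate solution path by warm-starting along a decreasing penalty grid, in the spirit of path-following algorithms. It illustrates the general technique only, not any of the speaker's algorithms; the function names, objective scaling, and penalty grid are assumptions.

import numpy as np

def soft_threshold(a, t):
    """Soft-thresholding operator S(a, t) = sign(a) * max(|a| - t, 0),
    the closed-form solution of the one-dimensional LASSO step."""
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def lasso_cd(X, y, lam, beta=None, iters=200):
    """Cyclic coordinate descent for (1/2n)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p) if beta is None else beta.copy()
    col_sq = (X ** 2).sum(axis=0) / n
    r = y - X @ beta                      # residual, kept up to date
    for _ in range(iters):
        for j in range(p):
            r += X[:, j] * beta[j]        # add coordinate j's contribution back
            rho = X[:, j] @ r / n
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * beta[j]
    return beta

# Approximate a solution path on a toy sparse problem: warm starts
# along a decreasing penalty grid.
rng = np.random.default_rng(1)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.5 * rng.standard_normal(n)

lam_max = np.max(np.abs(X.T @ y)) / n     # smallest penalty with all-zero solution
beta = np.zeros(p)
for lam in np.geomspace(lam_max, 0.01 * lam_max, 10):
    beta = lasso_cd(X, y, lam, beta)
    # count coefficients that are numerically nonzero
    print(f"lam={lam:.3f}  nonzeros={np.count_nonzero(np.round(beta, 3))}")

The warm start is the point of the loop: each solution seeds the next, smaller penalty, which is what makes tracing the whole path cheap relative to solving each problem from scratch.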





Department of Statistics &
The Research Center for Data Science